{"name": "train_C-41", "title": "Evaluating Adaptive Resource Management for Distributed Real-Time Embedded Systems", "abstract": "A challenging problem faced by researchers and developers of distributed real-time and embedded (DRE) systems is devising and implementing effective adaptive resource management strategies that can meet end-to-end quality of service (QoS) requirements in varying operational conditions. This paper presents two contributions to research in adaptive resource management for DRE systems. First, we describe the structure and functionality of the Hybrid Adaptive Resourcemanagement Middleware (HyARM), which provides adaptive resource management using hybrid control techniques for adapting to workload fluctuations and resource availability. Second, we evaluate the adaptive behavior of HyARM via experiments on a DRE multimedia system that distributes video in real-time. Our results indicate that HyARM yields predictable, stable, and high system performance, even in the face of fluctuating workload and resource availability.", "fulltext": "1. INTRODUCTION\nAchieving end-to-end real-time quality of service (QoS)\nis particularly important for open distributed real-time and\nembedded (DRE) systems that face resource constraints, such\nas limited computing power and network bandwidth.\nOverutilization of these system resources can yield unpredictable\nand unstable behavior, whereas under-utilization can yield\nexcessive system cost. A promising approach to meeting\nthese end-to-end QoS requirements effectively, therefore, is\nto develop and apply adaptive middleware [10, 15], which is\nsoftware whose functional and QoS-related properties can be\nmodified either statically or dynamically. Static\nmodifications are carried out to reduce footprint, leverage\ncapabilities that exist in specific platforms, enable functional\nsubsetting, and/or minimize hardware/software infrastructure\ndependencies. Objectives of dynamic modifications include\noptimizing system responses to changing environments or\nrequirements, such as changing component interconnections,\npower-levels, CPU and network bandwidth availability,\nlatency/jitter, and workload.\nIn open DRE systems, adaptive middleware must make\nsuch modifications dependably, i.e., while meeting\nstringent end-to-end QoS requirements, which requires the\nspecification and enforcement of upper and lower bounds on\nsystem resource utilization to ensure effective use of\nsystem resources. To meet these requirements, we have\ndeveloped the Hybrid Adaptive Resource-management\nMiddleware (HyARM), which is an open-source1\ndistributed\nresource management middleware.\nHyARM is based on hybrid control theoretic techniques [8],\nwhich provide a theoretical framework for designing\ncontrol of complex system with both continuous and discrete\ndynamics. In our case study, which involves a distributed\nreal-time video distribution system, the task of adaptive\nresource management is to control the utilization of the\ndifferent resources, whose utilizations are described by\ncontinuous variables. We achieve this by adapting the resolution\nof the transmitted video, which is modeled as a continuous\nvariable, and by changing the frame-rate and the\ncompression, which are modeled by discrete actions. We have\nimplemented HyARM atop The ACE ORB (TAO) [13], which\nis an implementation of the Real-time CORBA\nspecification [12]. 
Our results show that (1) HyARM ensures effective system resource utilization and (2) the end-to-end QoS requirements of higher priority applications are met, even in the face of fluctuations in workload.
The remainder of the paper is organized as follows: Section 2 describes the architecture, functionality, and resource utilization model of our DRE multimedia system case study; Section 3 explains the structure and functionality of HyARM; Section 4 evaluates the adaptive behavior of HyARM via experiments on our multimedia system case study; Section 5 compares our research on HyARM with related work; and Section 6 presents concluding remarks.
(1) The code and examples for HyARM are available at www.dre.vanderbilt.edu/~nshankar/HyARM/.
2. CASE STUDY: DRE MULTIMEDIA SYSTEM
This section describes the architecture and QoS requirements of our DRE multimedia system.
2.1 Multimedia System Architecture
[Figure 1: DRE Multimedia System Architecture -- cameras and video encoders on each UAV transmit over wireless links to a base station, which forwards the video over physical links to end receivers.]
The architecture for our DRE multimedia system is shown in Figure 1 and consists of the following entities: (1) Data source (video capture by UAV), where video is captured (related to a subject of interest) by camera(s) on each UAV, followed by encoding of the raw video using a specific encoding scheme and transmission of the video to the next stage in the pipeline. (2) Data distributor (base station), where the video is processed to remove noise, followed by retransmission of the processed video to the next stage in the pipeline. (3) Sinks (command and control center), where the received video is again processed to remove noise, then decoded and finally rendered to end users via graphical displays.
Significant improvements in video encoding/decoding and (de)compression techniques have been made as a result of recent advances in video encoding and compression techniques [14]. Common video compression schemes are MPEG-1, MPEG-2, Real Video, and MPEG-4. Each compression scheme is characterized by its resource requirements, e.g., the computational power to (de)compress the video signal and the network bandwidth required to transmit the compressed video signal. Properties of the compressed video, such as resolution and frame rate, determine both the quality and the resource requirements of the video.
Our multimedia system case study has the following end-to-end real-time QoS requirements: (1) latency, (2) inter-frame delay (also known as jitter), (3) frame rate, and (4) picture resolution. These QoS requirements can be classified as being either hard or soft. Hard QoS requirements should be met by the underlying system at all times, whereas soft QoS requirements can be missed occasionally.(2) For our case study, we treat QoS requirements such as latency and jitter as harder QoS requirements and strive to meet these requirements at all times. 
In contrast, we treat QoS requirements such as video frame rate and picture resolution as softer QoS requirements and modify these video properties adaptively to handle dynamic changes in resource availability effectively.
(2) Although hard and soft are often portrayed as two discrete requirement sets, in practice they are usually two ends of a continuum ranging from softer to harder rather than two disjoint points.
2.2 DRE Multimedia System Resources
There are two primary types of resources in our DRE multimedia system: (1) processors that provide the computational power available at the UAVs, base stations, and end receivers and (2) network links that provide communication bandwidth between UAVs, base stations, and end receivers. The computing power required by the video capture and encoding tasks depends on dynamic factors, such as the speed of the UAV, the speed of the subject (if the subject is mobile), and the distance between the UAV and the subject. The wireless network bandwidth available to transmit video captured by UAVs to base stations likewise depends on the wireless connectivity between the UAVs and the base station, which in turn depends on dynamic factors such as the speed of the UAVs and the relative distance between UAVs and base stations. The bandwidth of the link between the base station and the end receiver is limited, but more stable than the bandwidth of the wireless network. Resource requirements and the availability of resources are thus subject to dynamic changes.
Two classes of applications - QoS-enabled and best-effort - use the multimedia system infrastructure described above to transmit video to their respective receivers. The QoS-enabled class of applications has higher priority than the best-effort class. In our study, emergency response applications belong to the QoS-enabled class and surveillance applications to the best-effort class. For example, since a stream from an emergency response application is of higher importance than a video stream from a surveillance application, it receives more resources end-to-end.
Since resource availability significantly affects QoS, we use current resource utilization as the primary indicator of system performance. We refer to the current level of system resource utilization as the system condition. Based on this definition, we can classify system conditions as being either under, over, or effectively utilized.
Under-utilization of system resources occurs when the current resource utilization is lower than the desired lower bound on resource utilization. In this system condition, residual system resources (i.e., network bandwidth and computational power) are available in large amounts after meeting the end-to-end QoS requirements of applications. These residual resources can be used to increase the QoS of the applications. For example, residual CPU and network bandwidth can be used to deliver better quality video (e.g., with greater resolution and higher frame rate) to end receivers.
Over-utilization of system resources occurs when the current resource utilization is higher than the desired upper bound on resource utilization. This condition can arise from loss of resources - network bandwidth and/or computing power at the base station, end receiver, or UAV - or may be due to an increase in resource demands by applications. 
Over-utilization is generally undesirable since the quality of the received video (such as resolution and frame rate) and timeliness properties (such as latency and jitter) are degraded, and it may result in an unstable (and thus ineffective) system.
Effective resource utilization is the desired system condition since it ensures that the end-to-end QoS requirements of the UAV-based multimedia system are met and that the utilization of both system resources, i.e., network bandwidth and computational power, is within the desired utilization bounds. Section 3 describes the techniques we applied to achieve effective utilization, even in the face of fluctuating resource availability and/or demand.
3. OVERVIEW OF HYARM
This section describes the architecture of the Hybrid Adaptive Resource-management Middleware (HyARM). HyARM ensures efficient and predictable system performance by providing adaptive resource management, including monitoring of system resources and enforcing bounds on application resource utilization.
3.1 HyARM Structure and Functionality
[Figure 2: HyARM Architecture -- resource monitors report resource utilization to a central controller, which sends application parameters to application adapters that perform resource allocation.]
HyARM is composed of three types of entities shown in Figure 2 and described below:
Resource monitors observe the overall resource utilization for each type of resource and the resource utilization per application. In our multimedia system, there are resource monitors for CPU utilization and network bandwidth. CPU monitors observe the CPU resource utilization of the UAVs, base station, and end receivers. Network bandwidth monitors observe the network resource utilization of (1) the wireless network link between UAVs and the base station and (2) the wired network link between the base station and end receivers.
The central controller maintains the system resource utilization below a desired bound by (1) processing periodic updates it receives from resource monitors and (2) modifying the execution of applications accordingly, e.g., by using different execution algorithms or operating the application with increased/decreased QoS. This adaptation process ensures that system resources are utilized efficiently and end-to-end application QoS requirements are met. In our multimedia system, the HyARM controller determines the values of application parameters such as (1) the video compression scheme (e.g., Real Video or MPEG-4), (2) the frame rate, and (3) the picture resolution. From the perspective of hybrid control theoretic techniques [8], the different video compression schemes and frame rates form the discrete variables of application execution and the picture resolution forms the continuous variable.
Application adapters modify application execution according to the parameters recommended by the controller and ensure that the operation of the application is in accordance with the recommended parameters. In the current implementation of HyARM, the application adapter modifies the input parameters to the application that affect application QoS and resource utilization - compression scheme, frame rate, and picture resolution. In future implementations, we plan to use resource reservation mechanisms such as Differentiated Services [7, 3] and Class-based Kernel Resource Management [4] to provision/reserve network and CPU resources. In our multimedia system, the application adapter ensures that the video is encoded at the recommended frame rate and resolution using the specified compression scheme.
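To make the controller's role concrete, the following minimal sketch illustrates one plausible shape of such a hybrid adaptation step. It is our illustration rather than HyARM's actual code: the bounds, the frame-rate set, and the parameter names are assumptions, not HyARM APIs.
def scale(res, factor):
    # Continuous actuator: scale the picture resolution by the given factor.
    w, h = res
    return (max(1, int(w * factor)), max(1, int(h * factor)))

LOWER, UPPER = 0.5, 0.7        # assumed lower/upper utilization bounds
FRAME_RATES = [15, 20, 25]     # discrete actuator: allowed frame rates

def adapt(params, cpu_util, net_util):
    # One hybrid adaptation step: act on the most constrained resource.
    util = max(cpu_util, net_util)
    i = FRAME_RATES.index(params["fps"])  # params["fps"] must be in FRAME_RATES
    if util > UPPER:       # over-utilization: shed load by degrading QoS
        params["resolution"] = scale(params["resolution"], 0.8)
        if i > 0:
            params["fps"] = FRAME_RATES[i - 1]
    elif util < LOWER:     # under-utilization: spend residual resources
        params["resolution"] = scale(params["resolution"], 1.25)
        if i < len(FRAME_RATES) - 1:
            params["fps"] = FRAME_RATES[i + 1]
    return params
Switching the compression scheme, as HyARM also does, would be a further discrete action of the same kind.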
3.2 Applying HyARM to the Multimedia System Case Study
HyARM is built atop TAO [13], a widely used open-source implementation of Real-time CORBA [12]. HyARM can be applied to ensure efficient, predictable, and adaptive resource management of any DRE system where resource availability and requirements are subject to dynamic change.
Figure 3 shows the interaction of the various parts of the DRE multimedia system developed with HyARM, TAO, and TAO's A/V Streaming Service. TAO's A/V Streaming Service is an implementation of the CORBA A/V Streaming Service specification - a QoS-enabled video distribution service that can transfer video in real-time to one or more receivers. We use the A/V Streaming Service to transmit the video from the UAVs to the end receivers via the base station.
[Figure 3: Developing the DRE Multimedia System with HyARM -- UAV-side senders compress video (MPEG-1, MPEG-4, Real Video) and stream it via TAO's A/V Streaming Service to receivers, while HyARM resource monitors feed resource utilization to the central controller, which sends control inputs to application adapters via remote object calls.]
Three entities of HyARM, namely the resource monitors, central controller, and application adapters, are built as CORBA servants, so they can be distributed throughout a DRE system. Resource monitors are remote CORBA objects that update the central controller periodically with the current resource utilization. Application adapters are collocated with applications since the two interact closely.
As shown in Figure 3, UAVs compress the data using various compression schemes, such as MPEG-1, MPEG-4, and Real Video, and use TAO's A/V Streaming Service to transmit the video to end receivers. HyARM's resource monitors continuously observe the system resource utilization and notify the central controller of the current utilization.(3) The interaction between the controller and the resource monitors uses the Observer pattern [5]. When the controller receives resource utilization updates from monitors, it computes the necessary modifications to application parameters and notifies the application adapter(s) via a remote operation call. The application adapter(s), which are collocated with the application, modify the input parameters to the application - in our case the video encoder - to adjust the application's resource utilization and QoS.
(3) The base station is not included in the figure since it only retransmits the video received from UAVs to end receivers.
4. PERFORMANCE RESULTS AND ANALYSIS
This section first describes the testbed that provides the infrastructure for our DRE multimedia system, which was used to evaluate the performance of HyARM. We then describe our experiments and analyze the results obtained to empirically evaluate how HyARM behaves during under- and over-utilization of system resources.
4.1 Overview of the Hardware and Software Testbed
Our experiments were performed on the Emulab testbed at the University of Utah. The hardware configuration consists of two nodes acting as UAVs, one acting as base station, and one as end receiver. 
Video from the two UAVs was transmitted to the base station via a LAN configured with the following properties: an average packet loss ratio of 0.3 and a bandwidth of 1 Mbps. The network bandwidth was chosen to be 1 Mbps since each UAV in the DRE multimedia system is allocated 250 Kbps. These parameters were chosen to emulate an unreliable wireless network with limited bandwidth between the UAVs and the base station. From the base station, the video was retransmitted to the end receiver via a reliable wireline link of 10 Mbps bandwidth with no packet loss.
The hardware configuration of all the nodes was as follows: 600 MHz Intel Pentium III processor, 256 MB physical memory, 4 Intel EtherExpress Pro 10/100 Mbps Ethernet ports, and 13 GB hard drive. A real-time version of Linux - TimeSys Linux/NET 3.1.214, based on RedHat Linux 9 - was used as the operating system for all nodes. The following software packages were also used for our experiments: (1) Ffmpeg 0.4.9-pre1, an open-source library (http://www.ffmpeg.sourceforge.net/download.php) that compresses video into MPEG-2, MPEG-4, Real Video, and many other video formats. (2) Iftop 0.16, an open-source library (http://www.ex-parrot.com/~pdw/iftop/) we used for monitoring network activity and bandwidth utilization. (3) ACE 5.4.3 + TAO 1.4.3, an open-source (http://www.dre.vanderbilt.edu/TAO) implementation of the Real-time CORBA [12] specification upon which HyARM is built. TAO provides the CORBA Audio/Video (A/V) Streaming Service that we use to transmit the video from the UAVs to end receivers via the base station.
4.2 Experiment Configuration
Our experiment consisted of two (emulated) UAVs that simultaneously send video to the base station using the experimental setup described in Section 4.1. At the base station, the video was retransmitted to the end receivers (without any modifications), where it was stored to a file. Each UAV hosted two applications: one QoS-enabled application (emergency response) and one best-effort application (surveillance). Within each UAV, computational power is shared between the applications, while the network bandwidth is shared among all applications.
To evaluate the QoS provided by HyARM, we monitored CPU utilization at the two UAVs and network bandwidth utilization between the UAVs and the base station. CPU resource utilization was not monitored at the base station and the end receiver since they performed no computationally intensive operations. The utilization of the 10 Mbps physical link between the base station and the end receiver does not affect the QoS of applications and is not monitored by HyARM, since its bandwidth is nearly 10 times the 1 Mbps bandwidth of the LAN between the UAVs and the base station. The experiment also monitors properties of the video that affect the QoS of the applications, such as latency, jitter, frame rate, and resolution.
The set point on resource utilization for each resource was specified at 0.69, which is the upper bound typically recommended by scheduling techniques such as the rate monotonic algorithm [9]. Since studies [6] have shown that human eyes can perceive delays of more than 200 ms, we use this as the upper bound on the jitter of the received video. 
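For reference, the 0.69 figure is the asymptotic form of the classical rate monotonic schedulability bound [9]: n periodic tasks are schedulable under rate monotonic priorities if their total utilization does not exceed
U(n) = n (2^(1/n) - 1),
and U(n) decreases toward ln 2, approximately 0.693, as n grows. Keeping utilization at or below roughly 0.69 therefore preserves schedulability guarantees independent of the task count.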
QoS requirements for each class of application are specified during system initialization and are shown in Table 1.
Table 1: Application QoS Requirements
Class | Resolution | Frame Rate | Latency (msec) | Jitter (msec)
QoS-enabled | 1024 x 768 | 25 | 200 | 200
Best-effort | 320 x 240 | 15 | 300 | 250
4.3 Empirical Results and Analysis
This section presents the results obtained from running the experiment described in Section 4.2 on our DRE multimedia system testbed. We used system resource utilization as a metric to evaluate the adaptive resource management capabilities of HyARM under varying input workloads. We also used application QoS as a metric to evaluate HyARM's capabilities to support the end-to-end QoS requirements of the various classes of applications in the DRE multimedia system. We analyze these results to explain the significant differences in system performance and application QoS.
Comparison of system performance is decomposed into comparison of resource utilization and application QoS. For system resource utilization, we compare (1) network bandwidth utilization of the local area network and (2) CPU utilization at the two UAV nodes. For application QoS, we compare mean values of video parameters, including (1) picture resolution, (2) frame rate, (3) latency, and (4) jitter.
Comparison of resource utilization. Over-utilization of system resources in DRE systems can yield an unstable system. In contrast, under-utilization of system resources increases system cost. Figure 4 and Figure 5 compare the system resource utilization with and without HyARM.
[Figure 4: Resource utilization with HyARM. Figure 5: Resource utilization without HyARM.]
Figure 4 shows that HyARM maintains system utilization close to the desired utilization set point during fluctuations in input workload by transmitting video of higher (or lower) QoS for the QoS-enabled (or best-effort) class of applications during over- (or under-) utilization of system resources.
Figure 5 shows that without HyARM, network utilization was as high as 0.9 during increased workload conditions, which exceeds the utilization set point of 0.7 by 0.2. As a result of this over-utilization of resources, the QoS of the received video, such as average latency and jitter, was affected significantly. Without HyARM, system resources were either under-utilized or over-utilized, both of which are undesirable. In contrast, with HyARM, system resource utilization is always close to the desired set point, even during fluctuations in application workload. During sudden fluctuations in application workload, system conditions may be temporarily undesirable, but they are restored to the desired condition within several sampling periods. Temporary over-utilization of resources is permissible in our multimedia system since the quality of the video may be degraded for a short period of time, though application QoS would be degraded significantly if poor quality video were transmitted for a longer period.
Comparison of application QoS. Figure 6, Figure 7, and Table 2 compare the latency, jitter, and resolution/frame rate of the received video, respectively. Table 2 shows that HyARM increases the resolution and frame rate of QoS-enabled applications, but decreases the resolution and frame rate of best-effort applications. 
During over-utilization of system resources, the resolution and frame rate of lower priority applications are reduced to adapt to fluctuations in application workload and to maintain the utilization of resources at the specified set point.
It can be seen from Figure 6 and Figure 7 that HyARM reduces the latency and jitter of the received video significantly. These figures show that the QoS of QoS-enabled applications is greatly improved by HyARM. Although application parameters such as frame rate and resolution, which affect the soft QoS requirements of best-effort applications, may be compromised, the hard QoS requirements, such as latency and jitter, of all applications are met.
HyARM responds to fluctuations in resource availability and/or demand by constant monitoring of resource utilization. As shown in Figure 4, when resource utilization increases above the desired set point, HyARM lowers the utilization by reducing the QoS of best-effort applications. This adaptation ensures that enough resources are available for QoS-enabled applications to meet their QoS needs.
Figures 6 and 7 show that the latency and jitter of the received video with HyARM are nearly half of the corresponding values without HyARM. With HyARM, these parameters are well below the specified bounds, whereas without HyARM they are significantly above the specified bounds due to over-utilization of the network bandwidth, which leads to network congestion and results in packet loss. HyARM avoids this by reducing video parameters such as resolution and frame rate, and/or by modifying the compression scheme used to compress the video.
Our conclusions from analyzing the results described above are that applying adaptive middleware via hybrid control to DRE systems helps to (1) improve application QoS, (2) increase system resource utilization, and (3) provide better predictability (lower latency and inter-frame delay) to QoS-enabled applications. These improvements are achieved largely due to monitoring of system resource utilization, efficient system workload management, and adaptive resource provisioning by means of HyARM's network/CPU resource monitors, application adapters, and central controller, respectively.
5. RELATED WORK
A number of control theoretic approaches have been applied to DRE systems recently. These techniques aid in overcoming limitations of traditional scheduling approaches, which handle dynamic changes in resource availability poorly and result in a rigidly scheduled system that adapts poorly to change. A survey of these techniques is presented in [1].
One such approach is feedback control scheduling (FCS) [2, 11]. FCS algorithms dynamically adjust resource allocation by means of software feedback control loops. FCS algorithms are modeled and designed using rigorous control-theoretic methodologies. These algorithms provide robust and analytical performance assurances despite uncertainties in resource availability and/or demand. Although existing FCS algorithms have shown promise, they often assume that the system has continuous control variable(s) that can be adjusted continuously. While this assumption holds for certain classes of systems, there are many classes of DRE systems, such as avionics and total-ship computing environments, that only support a finite a priori set of discrete configurations. 
The control variables in such systems are therefore intrinsically discrete.
HyARM handles both continuous control variables, such as picture resolution, and discrete control variables, such as the discrete set of frame rates. HyARM can therefore be applied to systems that support continuous and/or discrete sets of control variables. The DRE multimedia system described in Section 2 is an example DRE system that offers both continuous (picture resolution) and discrete (frame rate) control variables. These variables are modified by HyARM to achieve efficient resource utilization and improved application QoS.
[Figure 6: Comparison of Video Latency. Figure 7: Comparison of Video Jitter.]
Table 2: Comparison of Video Quality (Picture Size / Frame Rate)
Source | With HyARM | Without HyARM
UAV1 QoS-enabled application | 1122 x 1496 / 25 | 960 x 720 / 20
UAV1 best-effort application | 288 x 384 / 15 | 640 x 480 / 20
UAV2 QoS-enabled application | 1126 x 1496 / 25 | 960 x 720 / 20
UAV2 best-effort application | 288 x 384 / 15 | 640 x 480 / 20
6. CONCLUDING REMARKS
Many distributed real-time and embedded (DRE) systems demand end-to-end quality of service (QoS) enforcement from their underlying platforms to operate correctly. These systems increasingly run in open environments, where resource availability is subject to dynamic change. To meet end-to-end QoS in dynamic environments, DRE systems can benefit from an adaptive middleware that monitors system resources, performs efficient application workload management, and enables efficient resource provisioning for executing applications.
This paper described HyARM, an adaptive middleware that provides effective resource management to DRE systems. HyARM employs hybrid control techniques to provide adaptive middleware capabilities, such as resource monitoring and application adaptation, that are key to providing dynamic resource management for open DRE systems. We applied HyARM to a representative DRE multimedia system implemented using Real-time CORBA and the CORBA A/V Streaming Service.
We evaluated the performance of HyARM in a system composed of three distributed resources and two classes of applications with two applications each. Our empirical results indicate that HyARM ensures (1) efficient resource utilization, by maintaining the utilization of system resources within the specified bounds, and (2) that the QoS requirements of QoS-enabled applications are met at all times. Overall, HyARM ensures efficient, predictable, and adaptive resource management for DRE systems.
7. REFERENCES
[1] T. F. Abdelzaher, J. Stankovic, C. Lu, R. Zhang, and Y. Lu. Feedback Performance Control in Software Services. IEEE Control Systems, 23(3), June 2003.
[2] L. Abeni, L. Palopoli, G. Lipari, and J. Walpole. Analysis of a reservation-based feedback scheduler. In IEEE Real-Time Systems Symposium, Dec. 2002.
[3] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss. An architecture for differentiated services. Network Information Center RFC 2475, Dec. 1998.
[4] H. Franke, S. Nagar, C. Seetharaman, and V. Kashyap. Enabling Autonomic Workload Management in Linux. In Proceedings of the International Conference on Autonomic Computing (ICAC), New York, New York, May 2004. IEEE.
[5] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA, 1995.
[6] G. Ghinea and J. P. Thomas. 
QoS impact on user perception and understanding of multimedia video clips. In MULTIMEDIA '98: Proceedings of the Sixth ACM International Conference on Multimedia, pages 49-54, Bristol, United Kingdom, 1998. ACM Press.
[7] Internet Engineering Task Force. Differentiated Services Working Group (diffserv) Charter. www.ietf.org/html.charters/diffserv-charter.html, 2000.
[8] X. Koutsoukos, R. Tekumalla, B. Natarajan, and C. Lu. Hybrid Supervisory Control of Real-Time Systems. In 11th IEEE Real-Time and Embedded Technology and Applications Symposium, San Francisco, California, Mar. 2005.
[9] J. Lehoczky, L. Sha, and Y. Ding. The Rate Monotonic Scheduling Algorithm: Exact Characterization and Average Case Behavior. In Proceedings of the 10th IEEE Real-Time Systems Symposium (RTSS 1989), pages 166-171. IEEE Computer Society Press, 1989.
[10] J. Loyall, J. Gossett, C. Gill, R. Schantz, J. Zinky, P. Pal, R. Shapiro, C. Rodrigues, M. Atighetchi, and D. Karr. Comparing and Contrasting Adaptive Middleware Support in Wide-Area and Embedded Distributed Object Applications. In Proceedings of the 21st International Conference on Distributed Computing Systems (ICDCS-21), pages 625-634. IEEE, Apr. 2001.
[11] C. Lu, J. A. Stankovic, G. Tao, and S. H. Son. Feedback Control Real-Time Scheduling: Framework, Modeling, and Algorithms. Real-Time Systems Journal, 23(1/2):85-126, July 2002.
[12] Object Management Group. Real-time CORBA Specification, OMG Document formal/02-08-02 edition, Aug. 2002.
[13] D. C. Schmidt, D. L. Levine, and S. Mungee. The Design and Performance of Real-Time Object Request Brokers. Computer Communications, 21(4):294-324, Apr. 1998.
[14] T. Sikora. Trends and Perspectives in Image and Video Coding. In Proceedings of the IEEE, Jan. 2005.
[15] X. Wang, H.-M. Huang, V. Subramonian, C. Lu, and C. Gill. CAMRIT: Control-based Adaptive Middleware for Real-time Image Transmission. In Proc. of the 10th IEEE Real-Time and Embedded Tech. and Applications Symp. (RTAS), Toronto, Canada, May 2004.", "keywords": "real-time video distribution system;dynamic environment;hybrid system;video encoding/decoding;quality of service;streaming service;hybrid adaptive resource-management middleware;distributed real-time embedded system;distribute real-time embed system;adaptive resource management;service end-to-end quality;hybrid control technique;service quality;real-time corba specification;end-to-end quality of service;resource reservation mechanism"} {"name": "train_C-42", "title": "Demonstration of Grid-Enabled Ensemble Kalman Filter Data Assimilation Methodology for Reservoir Characterization", "abstract": "Ensemble Kalman filter data assimilation methodology is a popular approach for hydrocarbon reservoir simulations in energy exploration. In this approach, an ensemble of geological models and production data of oil fields is used to forecast the dynamic response of oil wells. The Schlumberger ECLIPSE software is used for these simulations. Since models in the ensemble do not communicate, a message-passing implementation is a good choice. Each model checks out an ECLIPSE license and therefore, the parallelizability of reservoir simulations depends on the number of licenses available. We have Grid-enabled the ensemble Kalman filter data assimilation methodology for the TIGRE Grid computing environment. 
By pooling the licenses and computing resources across the collaborating institutions using the GridWay metascheduler and the TIGRE environment, the computational accuracy can be increased while reducing the simulation runtime. In this paper, we provide an account of our efforts in Grid-enabling the ensemble Kalman filter data assimilation methodology. Potential benefits of this approach, observations, and lessons learned will be discussed.", "fulltext": "1. INTRODUCTION
Grid computing [1] is an emerging collaborative computing paradigm that extends institution/organization specific high performance computing (HPC) capabilities greatly beyond local resources. Its importance stems from the fact that ground breaking research in strategic application areas such as bioscience and medicine, energy exploration, and environmental modeling involves strong interdisciplinary components and often requires intercampus collaborations and computational capabilities beyond institutional limitations.
The Texas Internet Grid for Research and Education (TIGRE) [2, 3] is a state funded cyberinfrastructure development project carried out by five major university systems - Rice, A&M, TTU, UH, and UT Austin - collectively called the TIGRE Institutions. The purpose of TIGRE is to create a higher education Grid to sustain and extend research and educational opportunities across Texas. TIGRE is a project of the High Performance Computing across Texas (HiPCAT) [4] consortium. The goal of HiPCAT is to support advanced computational technologies to enhance research, development, and educational activities.
The primary goal of TIGRE is to design and deploy state-of-the-art Grid middleware that enables integration of computing systems, storage systems and databases, visualization laboratories and displays, and even instruments and sensors across Texas. The secondary goal is to demonstrate the TIGRE capabilities to enhance research and educational opportunities in strategic application areas of interest to the State of Texas: bioscience and medicine, energy exploration, and air quality modeling. The vision of the TIGRE project is to foster interdisciplinary and intercampus collaborations, identify novel approaches to extend academic-government-private partnerships, and become a competitive model for external funding opportunities. The overall goal of TIGRE is to support local, campus, and regional user interests and offer avenues to connect with national Grid projects such as the Open Science Grid [5] and TeraGrid [6].
Within the energy exploration strategic application area, we have Grid-enabled the ensemble Kalman filter (EnKF) [7] approach for data assimilation in reservoir modeling and demonstrated the extensibility of the application using the TIGRE environment and the GridWay [8] metascheduler. Section 2 provides an overview of the TIGRE environment and capabilities. The application description and the need for Grid-enabling the EnKF methodology are provided in Section 3. The implementation details and merits of our approach are discussed in Section 4. Conclusions are provided in Section 5. Finally, observations and lessons learned are documented in Section 6.
2. TIGRE ENVIRONMENT
The TIGRE Grid middleware consists of a minimal set of components derived from a subset of the Virtual Data Toolkit (VDT) [9], which supports a variety of operating systems. 
The purpose of choosing a minimal software stack is to support the applications at hand and to simplify installation and distribution of client/server stacks across TIGRE sites. Additional components will be added as they become necessary. The PacMan [10] packaging and distribution mechanism is employed for TIGRE client/server installation and management. The PacMan distribution mechanism involves retrieval, installation, and often configuration of the packaged software. This approach allows the clients to keep current, consistent versions of TIGRE software. It also helps TIGRE sites to install the needed components on resources distributed throughout the participating sites. The TIGRE client/server stack consists of an authentication and authorization layer and Globus GRAM4-based job submission via web services (pre-web services installations are available upon request). Tools for handling Grid proxy generation, Grid-enabled file transfer, and Grid-enabled remote login are supported. The pertinent details of TIGRE services and tools for job scheduling and management are provided below.
2.1. Certificate Authority
The TIGRE security infrastructure includes a certificate authority (CA) accredited by the International Grid Trust Federation (IGTF) for issuing X.509 user and resource Grid certificates [11]. The Texas Advanced Computing Center (TACC), University of Texas at Austin, is TIGRE's shared CA. The TIGRE Institutions serve as Registration Authorities (RA) for their respective local user base. For up-to-date information on securing user and resource certificates and their installation instructions, see ref [2]. The users and hosts on TIGRE are identified by the distinguished name (DN) in their X.509 certificate provided by the CA. A native Grid-mapfile that contains a list of authorized DNs is used to authenticate and authorize user job scheduling and management on TIGRE site resources. At Texas Tech University, users are dynamically allocated one of many generic pool accounts. This is accomplished through the Grid User Management System (GUMS) [12].
2.2. Job Scheduling and Management
The TIGRE environment supports GRAM4-based job submission via web services. The job submission scripts are generated using XML. The web services GRAM translates the XML scripts into target cluster specific batch schedulers such as LSF, PBS, or SGE. High bandwidth file transfer protocols such as GridFTP are utilized for staging files in and out of the target machine. Login to remote hosts for compilation and debugging is only through the GSISSH service, which requires resource authentication through X.509 certificates. The authentication and authorization of Grid jobs are managed by issuing Grid certificates to both users and hosts. The certificate revocation lists (CRL) are updated on a daily basis to maintain the high security standards of the TIGRE Grid services. The TIGRE portal [2] documentation area provides a quick start tutorial on running jobs on TIGRE.
2.3. Metascheduler
The metascheduler interoperates with the cluster level batch schedulers (such as LSF, PBS) in the overall Grid workflow management. In the present work, we have employed the GridWay [8] metascheduler - a Globus incubator project - to schedule and manage jobs across TIGRE. GridWay is a light-weight metascheduler that fully utilizes Globus functionalities. 
It is designed to provide efficient use of dynamic Grid resources by multiple users for Grid infrastructures built on top of Globus services. The TIGRE site administrator can control resource sharing through a powerful built-in scheduler provided by GridWay or by extending GridWay's external scheduling module to provide their own scheduling policies. Application users can write job descriptions using GridWay's simple and direct job template format (see Section 4 for details) or the standard Job Submission Description Language (JSDL). See Section 4 for implementation details.
2.4. Customer Service Management System
A TIGRE portal [2] was designed and deployed to interface users and resource providers. It was designed using GridPort [13] and is maintained by TACC. The TIGRE environment is supported by open source tools such as the Open Ticket Request System (OTRS) [14] for servicing trouble tickets, and the MoinMoin [15] Wiki for TIGRE content and knowledge management for education, outreach, and training. The links for OTRS and the Wiki are consumed by the TIGRE portal [2] - the gateway for users and resource providers. The TIGRE resource status and loads are monitored by the Grid Port Information Repository (GPIR) service of the GridPort toolkit [13], which interfaces with local cluster load monitoring services such as Ganglia. GPIR utilizes cron jobs on each resource to gather site specific resource characteristics such as jobs that are running, queued, and waiting for resource allocation.
3. ENSEMBLE KALMAN FILTER APPLICATION
The main goal of hydrocarbon reservoir simulations is to forecast the production behavior of an oil and gas field (denoted as the field hereafter) for its development and optimal management. In reservoir modeling, the field is divided into several geological models as shown in Figure 1. For accurate performance forecasting of the field, it is necessary to reconcile several geological models to the dynamic response of the field through history matching [16-20].
[Figure 1. Cross-sectional view of the field. Vertical layers correspond to different geological models and the nails are oil wells whose historical information will be used for forecasting the production behavior. (Figure Ref: http://faculty.smu.edu/zchen/research.html)]
The EnKF is a Monte Carlo method that works with an ensemble of reservoir models. This method utilizes cross-covariances [21] between the field measurements and the reservoir model parameters (derived from several models) to estimate prediction uncertainties. The geological model parameters in the ensemble are sequentially updated with a goal to minimize the prediction uncertainties. Historical production response of the field for over 50 years is used in these simulations. The main advantage of EnKF is that it can be readily linked to any reservoir simulator, and can assimilate the latest production data without the need to re-run the simulator from initial conditions. Researchers in Texas are large subscribers of the Schlumberger ECLIPSE [22] package for reservoir simulations. In reservoir modeling, each geological model checks out an ECLIPSE license. The simulation runtime of the EnKF methodology depends on the number of geological models used, the number of ECLIPSE licenses available, the production history of the field, and the propagated uncertainties in history matching. The overall EnKF workflow is shown in Figure 2.
[Figure 2. Ensemble Kalman filter data assimilation workflow: START; read the configuration file; create N working directories and N input files; run models 1 through N with ECLIPSE at sites A through Z (each site has L licenses); collect the N model outputs and post-process the output files; repeat until the production histories are exhausted; END.]
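In outline, the analysis step of the method just described takes the standard EnKF form (following Evensen [7]; the notation here is ours): each forecast ensemble member x_j^f is updated with the perturbed observations d_j as
x_j^a = x_j^f + C_xd (C_dd + R)^(-1) (d_j - H x_j^f),
where C_xd and C_dd are the state-data cross-covariance and the predicted-data covariance estimated from the ensemble, H maps model parameters/state to measurements, and R is the measurement error covariance. The cross-covariances [21] referred to above play exactly this role.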
At START, the master/control process (the EnKF main program) reads the simulation configuration file for the number (N) of models and the model-specific input files. Then, N working directories are created to store the output files. At the end of each iteration, the master/control process collects the output files from the N models and post-processes the cross-covariances [21] to estimate the prediction uncertainties. This information is used to update the models (or input files) for the next iteration. The simulation continues until the production histories are exhausted.
A typical EnKF simulation with N = 50 and field histories of 50-60 years, in time steps ranging from three months to a year, takes about three weeks in a serial computing environment.
In a parallel computing environment, there is no interprocess communication between the geological models in the ensemble. However, at the end of each simulation time step, the model-specific output files have to be collected for analyzing cross-covariances [21] and for preparing the next set of input files. Therefore, a master-slave model in a message-passing (MPI) environment is a suitable paradigm. In this approach, the geological models are treated as slaves and are distributed across the available processors. The master process collects the model-specific output files, analyzes them, and prepares the next set of input files for the simulation. Since each geological model checks out an ECLIPSE license, the parallelizability of the simulation depends on the number of licenses available. When the available number of licenses is less than the number of models in the ensemble, one or more of the nodes in the MPI group have to handle more than one model in a serial fashion and therefore it takes longer to complete the simulation.
A Petroleum Engineering Department usually procures 10-15 ECLIPSE licenses, while at least a ten-fold increase in the number of licenses would be necessary for industry standard simulations. The number of licenses can be increased by involving several Petroleum Engineering Departments that support the ECLIPSE package.
Since MPI does not scale very well for applications that involve remote compute clusters, and to get around the firewall issues with license servers across administrative domains, Grid-enabling the EnKF workflow seems to be necessary. With this motivation, we have implemented a Grid-enabled EnKF workflow for the TIGRE environment and demonstrated the parallelizability of the application across TIGRE using the GridWay metascheduler. Further details are provided in the next section.
4. IMPLEMENTATION DETAILS
To Grid-enable the EnKF approach, we have eliminated the MPI code for parallel processing and replaced it with N single processor jobs (or sub-jobs), where N is the number of geological models in the ensemble. These model-specific sub-jobs were distributed across TIGRE sites that support the ECLIPSE package using the GridWay [8] metascheduler. For each sub-job, we have constructed a GridWay job template that specifies the executable, the input and output files, and the resource requirements. 
Since the TIGRE compute resources are not expected to change frequently, we have used a static resource discovery policy for GridWay, and the sub-jobs were scheduled dynamically across the TIGRE resources using GridWay. Figure 3 shows the sub-job template file for the GridWay metascheduler.
EXECUTABLE=runFORWARD
REQUIREMENTS=HOSTNAME=cosmos.tamu.edu | HOSTNAME=antaeus.hpcc.ttu.edu | HOSTNAME=minigar.hpcc.ttu.edu
ARGUMENTS=001
INPUT_FILES=001.in.tar
OUTPUT_FILES=001.out.tar
Figure 3. GridWay Sub-Job Template
In Figure 3, the REQUIREMENTS flag is set to choose the resources that satisfy the application requirements. In the case of the EnKF application, for example, we need resources that support the ECLIPSE package. The ARGUMENTS flag specifies the model in the ensemble that will invoke ECLIPSE at a remote site. INPUT_FILES is prepared by the EnKF main program (or master/control process) and is transferred by GridWay to the remote site, where it is untarred and prepared for execution. Finally, OUTPUT_FILES specifies the name and location where the output files are to be written.
The command-line features of GridWay were used to collect and process the model-specific outputs to prepare the new set of input files. This step mimics MPI process synchronization in the master-slave model. At the end of each iteration, the compute resources and licenses are committed back to the pool. Table 1 shows the sub-jobs in the TIGRE Grid via GridWay using the gwps command; for clarity, only selected columns are shown.
USER | JID | DM | EM | NAME | HOST
pingluo | 88 | wrap | pend | enkf.jt | antaeus.hpcc.ttu.edu/LSF
pingluo | 89 | wrap | pend | enkf.jt | antaeus.hpcc.ttu.edu/LSF
pingluo | 90 | wrap | actv | enkf.jt | minigar.hpcc.ttu.edu/LSF
pingluo | 91 | wrap | pend | enkf.jt | minigar.hpcc.ttu.edu/LSF
pingluo | 92 | wrap | done | enkf.jt | cosmos.tamu.edu/PBS
pingluo | 93 | wrap | epil | enkf.jt | cosmos.tamu.edu/PBS
Table 1. Job scheduling across TIGRE using the GridWay metascheduler. DM: dispatch state, EM: execution state, JID is the job id, and HOST corresponds to the site specific cluster and its local batch scheduler.
When a job is submitted to GridWay, it goes through a series of dispatch (DM) and execution (EM) states. For DM, the states include pend(ing), prol(og), wrap(per), epil(og), and done. DM=prol means the job has been scheduled to a resource and the remote working directory is in preparation. DM=wrap implies that GridWay is executing the wrapper, which in turn executes the application. DM=epil implies the job has finished running at the remote site and results are being transferred back to the GridWay server. Similarly, EM=pend implies the job is waiting in the queue for a resource, and the job is running when EM=actv. For the complete list of message flags and their descriptions, see the documentation in ref [8].
We have demonstrated Grid-enabled EnKF runs using GridWay for the TIGRE environment. The jobs were chosen so that the runtime does not exceed half an hour. The simulation runs involved up to 20 jobs between the A&M and TTU sites, with TTU serving 10 licenses. For resource information, see Table 1.
One of the main advantages of the Grid-enabled EnKF simulation is that both the resources and licenses are released back to the pool at the end of each simulation time step, unlike in the MPI implementation where licenses and nodes are locked until the completion of the entire simulation. However, the fact that each sub-job gets scheduled independently via GridWay could incur additional time delays caused by waiting in queue for execution in each simulation time step. Such delays are not expected in the MPI implementation, where the node is blocked for processing sub-jobs (model-specific calculations) until the end of the simulation.
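As an illustration of how such templates can be produced for all N models, the following sketch (ours, not part of the published workflow) emits one Figure 3-style template per ensemble member, following the 001.in.tar naming convention used above; each resulting .jt file would then be submitted through GridWay's command-line tools (e.g., gwsubmit).
# Minimal sketch of generating the N model-specific GridWay templates in the
# Figure 3 format; the host list and file naming follow the text above.
HOSTS = ["cosmos.tamu.edu", "antaeus.hpcc.ttu.edu", "minigar.hpcc.ttu.edu"]

def write_templates(n_models):
    req = " | ".join("HOSTNAME=%s" % h for h in HOSTS)
    for m in range(1, n_models + 1):
        tag = "%03d" % m                      # 001, 002, ...
        with open("enkf_%s.jt" % tag, "w") as f:
            f.write("EXECUTABLE=runFORWARD\n")
            f.write("REQUIREMENTS=%s\n" % req)
            f.write("ARGUMENTS=%s\n" % tag)   # selects the ensemble member
            f.write("INPUT_FILES=%s.in.tar\n" % tag)
            f.write("OUTPUT_FILES=%s.out.tar\n" % tag)

write_templates(20)  # e.g., the 20-sub-job run described above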
There are two main scenarios for comparing the Grid and cluster computing approaches.
Scenario I: The cluster is heavily loaded. The conceived average waiting time of a job requesting a large number of CPUs is usually longer than the waiting time of jobs requesting a single CPU. Therefore, the overall waiting time could be shorter in the Grid approach, which requests a single CPU for each sub-job many times, compared to the MPI implementation, which requests a large number of CPUs at a single time. It is apparent that Grid scheduling is beneficial especially when the cluster is heavily loaded and the requested number of CPUs for the MPI job is not readily available.
Scenario II: The cluster is relatively less loaded or largely available. Here the MPI implementation appears favorable compared to Grid scheduling. However, the parallelizability of the EnKF application depends on the number of ECLIPSE licenses, and ideally the number of licenses should be equal to the number of models in the ensemble. Therefore, if a single institution does not have a sufficient number of licenses, cluster availability does not help as much as might be expected.
Since a collaborative environment such as TIGRE can address both the compute and software resource requirements of the EnKF application, the Grid-enabled approach is still advantageous over the conventional MPI implementation in either of the above scenarios.
5. CONCLUSIONS AND FUTURE WORK
TIGRE is a higher education Grid development project whose purpose is to sustain and extend research and educational opportunities across Texas. Within the energy exploration application area, we have Grid-enabled the MPI implementation of the ensemble Kalman filter data assimilation methodology for reservoir characterization. This task was accomplished by removing the MPI code for parallel processing and replacing it with single processor jobs, one for each geological model in the ensemble. These single processor jobs were scheduled across TIGRE via the GridWay metascheduler. We have demonstrated that by pooling licenses across TIGRE sites, more geological models can be handled in parallel, yielding conceivably better simulation accuracy. This approach has several advantages over the MPI implementation, especially when a site specific cluster is heavily loaded and/or the number of licenses required for the simulation is more than those available at a single site.
Toward future work, it would be interesting to compare the runtime between the MPI and Grid implementations of the EnKF application. This effort could shed light on the quality of service (QoS) of Grid environments in comparison with cluster computing. Another aspect of interest in the near future would be managing both compute and license resources to address the job (or processor)-to-license ratio management.
6. OBSERVATIONS AND LESSONS LEARNED
The Grid-enabling efforts for the EnKF application have provided ample opportunities to gather insights on the visibility and promise of Grid computing environments for application development and support. The main issues are
The main issues are\nindustry standard data security and QoS comparable to\ncluster computing.\nSince the reservoir modeling research involves\nproprietary data of the field, we had to invest substantial\nefforts initially in educating the application researchers on\nthe ability of Grid services in supporting the industry\nstandard data security through role- and privilege-based\naccess using X.509 standard.\nWith respect to QoS, application researchers expect\ncluster level QoS with Grid environments. Also, there is a\nsteep learning curve in Grid computing compared to the\nconventional cluster computing. Since Grid computing is\nstill an emerging technology, and it spans over several\nadministrative domains, Grid computing is still premature\nespecially in terms of the level of QoS although, it offers\nbetter data security standards compared to commodity\nclusters.\nIt is our observation that training and outreach programs\nthat compare and contrast the Grid and cluster computing\nenvironments would be a suitable approach for enhancing\nuser participation in Grid computing. This approach also\nhelps users to match their applications and abilities Grids\ncan offer.\nIn summary, our efforts through TIGRE in Grid-enabling\nthe EnKF data assimilation methodology showed\nsubstantial promise in engaging Petroleum Engineering\nresearchers through intercampus collaborations. Efforts are\nunder way to involve more schools in this effort. These\nefforts may result in increased collaborative research,\neducational opportunities, and workforce development\nthrough graduate/faculty research programs across TIGRE\nInstitutions.\n7. ACKNOWLEDGMENTS\nThe authors acknowledge the State of Texas for supporting\nthe TIGRE project through the Texas Enterprise Fund, and\nTIGRE Institutions for providing the mechanism, in which\nthe authors (Ravi Vadapalli, Taesung Kim, and Ping Luo)\nare also participating. The authors thank the application\nresearchers Prof. Akhil Datta-Gupta of Texas A&M\nUniversity and Prof. Lloyd Heinze of Texas Tech\nUniversity for their discussions and interest to exploit the\nTIGRE environment to extend opportunities in research and\ndevelopment.\n8. REFERENCES\n[1] Foster, I. and Kesselman, C. (eds.) 2004. The Grid: Blueprint\nfor a new computing infrastructure (The Elsevier series in\nGrid computing)\n[2] TIGRE Portal: http://tigreportal.hipcat.net\n[3] Vadapalli, R. Sill, A., Dooley, R., Murray, M., Luo, P., Kim,\nT., Huang, M., Thyagaraja, K., and Chaffin, D. 2007.\nDemonstration of TIGRE environment for Grid\nenabled/suitable applications. 8th\nIEEE/ACM Int. Conf. on\nGrid Computing, Sept 19-21, Austin\n[4] The High Performance Computing across Texas Consortium\nhttp://www.hipcat.net\n[5] Pordes, R. Petravick, D. Kramer, B. Olson, D. Livny, M.\nRoy, A. Avery, P. Blackburn, K. Wenaus, T. W\u00fcrthwein, F.\nFoster, I. Gardner, R. Wilde, M. Blatecky, A. McGee, J. and\nQuick, R. 2007. The Open Science Grid, J. Phys Conf Series\nhttp://www.iop.org/EJ/abstract/1742-6596/78/1/012057 and\nhttp://www.opensciencegrid.org\n[6] Reed, D.A. 2003. Grids, the TeraGrid and Beyond,\nComputer, vol 30, no. 1 and http://www.teragrid.org\n[7] Evensen, G. 2006. Data Assimilation: The Ensemble Kalman\nFilter, Springer\n[8] Herrera, J. Huedo, E. Montero, R. S. and Llorente, I. M.\n2005. Scientific Programming, vol 12, No. 4. pp 317-331\n[9] Avery, P. and Foster, I. 2001. 
", "keywords": "pooling license;grid-enabling;ensemble kalman filter;and gridway;cyberinfrastructure development project;tigre grid computing environment;grid computing;hydrocarbon reservoir simulation;gridway metascheduler;enkf;datum assimilation methodology;high performance computing;tigre;energy exploration;tigre grid middleware;strategic application area;reservoir model"} {"name": "train_C-44", "title": "MSP: Multi-Sequence Positioning of Wireless Sensor Nodes\u2217", "abstract": "Wireless Sensor Networks have been proposed for use in many location-dependent applications. Most of these need to identify the locations of wireless sensor nodes, a challenging task because of the severe constraints on cost, energy and effective range of sensor devices. To overcome limitations in existing solutions, we present a Multi-Sequence Positioning (MSP) method for large-scale stationary sensor node localization in outdoor environments. The novel idea behind MSP is to reconstruct and estimate two-dimensional location information for each sensor node by processing multiple one-dimensional node sequences, easily obtained through loosely guided event distribution.
Starting from a basic MSP design, we propose four optimizations, which work together to increase the localization accuracy. We address several interesting issues, such as incomplete (partial) node sequences and sequence flip, found in the Mirage test-bed we built. We have evaluated the MSP system through theoretical analysis, extensive simulation as well as two physical systems (an indoor version with 46 MICAz motes and an outdoor version with 20 MICAz motes). This evaluation demonstrates that MSP can achieve an accuracy within one foot, requiring neither additional costly hardware on sensor nodes nor precise event distribution. It also provides a nice tradeoff between physical cost (anchors) and soft cost (events), while maintaining localization accuracy.", "fulltext": "1 Introduction
Although Wireless Sensor Networks (WSN) have shown promising prospects in various applications [5], researchers still face several challenges for massive deployment of such networks. One of these is to identify the location of individual sensor nodes in outdoor environments. Because of unpredictable flow dynamics in airborne scenarios, it is not currently feasible to localize sensor nodes during massive UAV-based deployment. On the other hand, geometric information is indispensable in these networks, since users need to know where events of interest occur (e.g., the location of intruders or of a bomb explosion).
Previous research on node localization falls into two categories: range-based approaches and range-free approaches. Range-based approaches [13, 17, 19, 24] compute per-node location information iteratively or recursively based on measured distances among target nodes and a few anchors which precisely know their locations. These approaches generally require costly hardware (e.g., GPS) and have limited effective range due to energy constraints (e.g., ultrasound-based TDOA [3, 17]). Although range-based solutions can be suitably used in small-scale indoor environments, they are considered less cost-effective for large-scale deployments. On the other hand, range-free approaches [4, 8, 10, 13, 14, 15] do not require accurate distance measurements, but localize the node based on network connectivity (proximity) information. Unfortunately, since wireless connectivity is highly influenced by the environment and hardware calibration, existing solutions fail to deliver encouraging empirical results, or require substantial survey [2] and calibration [24] efforts on a case-by-case basis.
Realizing the impracticality of existing solutions for the large-scale outdoor environment, researchers have recently proposed solutions (e.g., Spotlight [20] and Lighthouse [18]) for sensor node localization using the spatiotemporal correlation of controlled events, i.e., inferring nodes' locations based on the detection time of controlled events. These solutions demonstrate that long-range and high-accuracy localization can be achieved simultaneously with little additional cost at sensor nodes. These benefits, however, come along with an implicit assumption that the controlled events can be precisely distributed to a specified location at a specified time.
We argue that precise event distribution is difficult to achieve, especially at large scale when the terrain is uneven, the event distribution device is not well calibrated, and its position is difficult to maintain (e.g., the helicopter-mounted scenario in [20]).
To address these limitations in current approaches, in this paper we present a multi-sequence positioning (MSP) method for large-scale stationary sensor node localization, in deployments where an event source has line-of-sight to all sensors. The novel idea behind MSP is to estimate each sensor node's two-dimensional location by processing multiple easy-to-get one-dimensional node sequences (e.g., event detection order) obtained through loosely guided event distribution.
This design offers several benefits. First, compared to a range-based approach, MSP does not require additional costly hardware. It works using sensors typically used by sensor network applications, such as light and acoustic sensors, both of which we specifically consider in this work. Second, compared to a range-free approach, MSP needs only a small number of anchors (theoretically, as few as two), so high accuracy can be achieved economically by introducing more events instead of more anchors. And third, compared to Spotlight, MSP does not require precise and sophisticated event distribution, an advantage that significantly simplifies the system design and reduces calibration cost.
This paper offers the following additional intellectual contributions:
• We are the first to localize sensor nodes using the concept of node sequence, an ordered list of sensor nodes, sorted by the detection time of a disseminated event. We demonstrate that making full use of the information embedded in one-dimensional node sequences can significantly improve localization accuracy. Interestingly, we discover that repeated reprocessing of one-dimensional node sequences can further increase localization accuracy.
• We propose a distribution-based location estimation strategy that obtains the final location of sensor nodes using the marginal probability of the joint distribution among adjacent nodes within the sequence. This new algorithm outperforms the widely adopted Centroid estimation [4, 8].
• To the best of our knowledge, this is the first work to improve the localization accuracy of nodes by adaptive events, where the generation of later events is guided by localization results from previous events.
• We evaluate line-based MSP on our new Mirage test-bed, and wave-based MSP in outdoor environments. Through system implementation, we discover and address several interesting issues such as partial sequences and sequence flips. To reveal MSP performance at scale, we provide analytic results as well as a complete simulation study. All the simulation and implementation code is available online at http://www.cs.umn.edu/\u223czhong/MSP.
The rest of the paper is organized as follows: Section 2 briefly surveys the related work. Section 3 presents an overview of the MSP localization system. In Sections 4 and 5, basic MSP and four advanced processing methods are introduced. Section 6 describes how MSP can be applied in a wave propagation scenario. Section 7 discusses several implementation issues. Section 8 presents simulation results, and Section 9 reports an evaluation of MSP on the Mirage test-bed and an outdoor test-bed.
Section 10 concludes the paper.
2 Related Work
Many methods have been proposed to localize wireless sensor devices in the open air. Most of these can be classified into two categories: range-based and range-free localization. Range-based localization systems, such as GPS [23], Cricket [17], AHLoS [19], AOA [16], Robust Quadrilaterals [13] and Sweeps [7], are based on fine-grained point-to-point distance estimation or angle estimation to identify per-node location. Constraints on the cost, energy and hardware footprint of each sensor node make these range-based methods undesirable for massive outdoor deployment. In addition, ranging signals generated by sensor nodes have a very limited effective range because of energy and form factor concerns. For example, ultrasound signals usually propagate effectively only 20-30 feet using an on-board transmitter [17]. Consequently, these range-based solutions require an undesirably high deployment density. Although received signal strength indicator (RSSI)-related methods [2, 24] were once considered an ideal low-cost solution, the irregularity of radio propagation [26] seriously limits the accuracy of such systems. The recently proposed RIPS localization system [11] superimposes two RF waves, creating a low-frequency envelope that can be accurately measured. This ranging technique performs very well as long as antennas are well oriented and environmental factors such as multi-path effects and background noise are sufficiently addressed.
Range-free methods do not need to estimate or measure accurate distances or angles. Instead, anchors or controlled-event distributions are used for node localization. Range-free methods can be generally classified into two types: anchor-based and anchor-free solutions.
• For anchor-based solutions such as Centroid [4], APIT [8], SeRLoc [10], Gradient [13], and APS [15], the main idea is that the location of each node is estimated based on the known locations of the anchor nodes. Different anchor combinations narrow the areas in which the target nodes can possibly be located. Anchor-based solutions normally require a high density of anchor nodes so as to achieve good accuracy. In practice, it is desirable to have as few anchor nodes as possible so as to lower the system cost.
• Anchor-free solutions require no anchor nodes. Instead, external event generators and data processing platforms are used. The main idea is to correlate the event detection time at a sensor node with the known space-time relationship of controlled events at the generator, so that detection time-stamps can be mapped into the locations of sensors. Spotlight [20] and Lighthouse [18] work in this fashion. In Spotlight [20], the event distribution needs to be precise in both time and space. Precise event distribution is difficult to achieve without careful calibration, especially when the event-generating devices require certain mechanical maneuvers (e.g., the telescope mount used in Spotlight). All these increase system cost and reduce localization speed. StarDust [21], which works much faster, uses label relaxation algorithms to match light spots reflected by corner-cube retro-reflectors (CCR) with sensor nodes using various constraints. Label relaxation algorithms converge only when a sufficient number of robust constraints are obtained.
Due to the environmental impact on RF connectivity constraints, however, StarDust is less accurate than Spotlight.
In this paper, we propose a balanced solution that avoids the limitations of both anchor-based and anchor-free solutions. Unlike anchor-based solutions [4, 8], MSP allows a flexible tradeoff between the physical cost (anchor nodes) and the soft cost (localization events). MSP uses only a small number of anchors (theoretically, as few as two). Unlike anchor-free solutions, MSP does not need to maintain rigid time-space relationships while distributing events, which makes the system design simpler, more flexible, and more robust to calibration errors.
3 System Overview
MSP works by extracting relative location information from multiple simple one-dimensional orderings of nodes. Figure 1(a) shows a layout of a sensor network with anchor nodes and target nodes. Target nodes are defined as the nodes to be localized. Briefly, the MSP system works as follows. First, events are generated one at a time in the network area (e.g., ultrasound propagations from different locations, laser scans with diverse angles). As each event propagates, as shown in Figure 1(a), each node detects it at some particular time instance. For a single event, we call the ordering of nodes, based on the sequential detection of the event, a node sequence. Each node sequence includes both the targets and the anchors, as shown in Figure 1(b). Second, a multi-sequence processing algorithm helps to narrow the possible location of each node to a small area (Figure 1(c)). Finally, a distribution-based estimation method estimates the exact location of each sensor node, as shown in Figure 1(d).
Figure 1. The MSP System Overview
Figure 1 shows that the node sequences can be obtained much more economically than accurate pair-wise distance measurements between target nodes and anchor nodes via ranging methods. In addition, this system does not require a rigid time-space relationship for the localization events, which is critical but hard to achieve in controlled event distribution scenarios (e.g., Spotlight [20]).
For the sake of clarity in presentation, we present our system in two cases:
• Ideal Case, in which all the node sequences obtained from the network are complete and correct, and nodes are time-synchronized [12, 9].
• Realistic Deployment, in which (i) node sequences can be partial (incomplete), (ii) elements in sequences could flip (i.e., the order obtained is reversed from reality), and (iii) nodes are not time-synchronized.
To introduce the MSP algorithm, we first consider a simple straight-line scan scenario. Then, we describe how to implement straight-line scans as well as other event types, such as sound wave propagation.
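Since a node sequence is nothing more than the detection order of a single event, it can be assembled with one sort once detection reports are collected. The following minimal Python sketch uses made-up timestamps (the variable names are ours, not the paper's); it reproduces the node sequence used for scan 1 in the next section (Figure 2):

# detections: node id -> time at which this node sensed the event
detections = {"8": 10.21, "1": 10.34, "5": 10.40, "A": 10.52,
              "6": 10.77, "C": 10.80, "4": 10.91, "3": 11.02,
              "7": 11.15, "2": 11.23, "B": 11.37, "9": 11.50}

# The node sequence for this event is simply the detection order.
node_sequence = sorted(detections, key=detections.get)
print(node_sequence)  # ['8', '1', '5', 'A', '6', 'C', '4', '3', '7', '2', 'B', '9']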
4 Basic MSP
Let us consider a sensor network with N target nodes and M anchor nodes randomly deployed in an area of size S. The top-level idea of basic MSP is to split the whole sensor network area into small pieces by processing node sequences. Because the exact locations of all the anchors in a node sequence are known, all the nodes in this sequence can be divided into O(M+1) parts in the area.
In Figure 2, we use numbered circles to denote target nodes and numbered hexagons to denote anchor nodes. Basic MSP uses two straight lines to scan the area from different directions, treating each scan as an event. All the nodes react to the event sequentially, generating two node sequences. For vertical scan 1, the node sequence is (8,1,5,A,6,C,4,3,7,2,B,9), as shown outside the right boundary of the area in Figure 2; for horizontal scan 2, the node sequence is (3,1,C,5,9,2,A,4,6,B,7,8), as shown under the bottom boundary of the area in Figure 2. Since the locations of the anchor nodes are available, the anchor nodes in the two node sequences actually split the area vertically and horizontally into 16 parts, as shown in Figure 2.
Figure 2. Obtaining Multiple Node Sequences
To extend this process, suppose we have M anchor nodes and perform d scans from different angles, obtaining d node sequences and dividing the area into many small parts. Obviously, the number of parts is a function of the number of anchors M, the number of scans d, the anchors' locations, as well as the slope k of each scan line. According to the pie-cutting theorem [22], the area can be divided into O(M^2 d^2) parts. When M and d are appropriately large, the polygon for each target node may become sufficiently small so that accurate estimation can be achieved. We emphasize that accuracy is affected not only by the number of anchors M, but also by the number of events d. In other words, MSP provides a tradeoff between the physical cost of anchors and the soft cost of events.
Algorithm 1 Basic MSP Process
Output: The estimated location of each node.
1: repeat
2: GetOneUnprocessedSequence();
3: repeat
4: GetOneNodeFromSequenceInOrder();
5: GetBoundaries();
6: UpdateMap();
7: until All the target nodes are updated;
8: until All the node sequences are processed;
9: repeat
10: GetOneUnestimatedNode();
11: CentroidEstimation();
12: until All the target nodes are estimated;
Algorithm 1 depicts the computing architecture of basic MSP. Each node sequence is processed within lines 1 to 8. For each node, GetBoundaries() in line 5 searches for the predecessor and successor anchors in the sequence so as to determine the boundaries of this node. Then, in line 6, UpdateMap() shrinks the location area of this node according to the newly obtained boundaries. After processing all sequences, CentroidEstimation (line 11) sets the center of gravity of the final polygon as the estimated location of the target node.
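For axis-aligned scans, Algorithm 1 reduces to interval clamping, as the Python sketch below illustrates. It is our own simplified rendering, not the authors' implementation (which maintains a full area map rather than bounding boxes), and the anchor coordinates are hypothetical:

# anchors: id -> (x, y); every target starts with the whole field as its box.
FIELD = (0.0, 200.0)
anchors = {"A": (60.0, 140.0), "B": (160.0, 170.0), "C": (110.0, 90.0)}
boxes = {n: [FIELD[0], FIELD[1], FIELD[0], FIELD[1]]  # [x_lo, x_hi, y_lo, y_hi]
         for n in "123456789"}

def basic_msp(sequence, axis):
    # axis = 0: vertical scan line sweeping along x; axis = 1: along y.
    lo, hi = 2 * axis, 2 * axis + 1
    for i, node in enumerate(sequence):
        if node in anchors:
            continue
        # GetBoundaries(): nearest anchors before and after this target.
        pred = next((m for m in reversed(sequence[:i]) if m in anchors), None)
        succ = next((m for m in sequence[i + 1:] if m in anchors), None)
        # UpdateMap(): shrink the target's box along the scanned axis.
        if pred is not None:
            boxes[node][lo] = max(boxes[node][lo], anchors[pred][axis])
        if succ is not None:
            boxes[node][hi] = min(boxes[node][hi], anchors[succ][axis])

basic_msp(list("815A6C4372B9"), axis=0)   # vertical scan 1 of Figure 2
basic_msp(list("31C592A46B78"), axis=1)   # horizontal scan 2 of Figure 2
# CentroidEstimation(): center of gravity of each remaining rectangle.
centroids = {n: ((b[0] + b[1]) / 2, (b[2] + b[3]) / 2) for n, b in boxes.items()}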
Basic MSP only makes use of the order information between a target node and the anchor nodes in each sequence. Actually, we can extract much more location information from each sequence. Section 5 will introduce advanced MSP, in which four novel optimizations are proposed to improve the performance of MSP significantly.
5 Advanced MSP
Four improvements to basic MSP are proposed in this section. The first three improvements need no additional sensing or communication in the network, but require only slightly more off-line computation. The objective of all these improvements is to make full use of the information embedded in the node sequences. The results we have obtained empirically indicate that the first two methods can dramatically reduce the localization error, and that the third and fourth methods are helpful for some system deployments.
5.1 Sequence-Based MSP
As shown in Figure 2, each scan line, together with the M anchors, splits the whole area into M+1 parts. Each target node falls into one polygon shaped by the scan lines. We note that in basic MSP, only the anchors are used to narrow down the polygon of each target node, but there is actually more information in the node sequence that we can make use of.
Let us first look at the simple example shown in Figure 3. The previous scans narrow the locations of target node 1 and node 2 into the two dashed rectangles shown in the left part of Figure 3. Then a new scan generates a new sequence (1, 2). With knowledge of the scan's direction, it is easy to tell that node 1 is located to the left of node 2. Thus, we can further narrow the location area of node 2 by eliminating the shaded part of node 2's rectangle, because node 2 is located to the right of node 1 while the shaded area lies outside node 1's lower boundary. Similarly, the location area of node 1 can be narrowed by eliminating the shaded part beyond node 2's upper boundary.
Figure 3. Rule Illustration in Sequence-Based MSP
We call this procedure sequence-based MSP, meaning that the whole node sequence is processed node by node in order. Specifically, sequence-based MSP follows this exact processing rule:
Elimination Rule: Along a scanning direction, the lower boundary of the successor's area must be equal to or larger than the lower boundary of the predecessor's area, and the upper boundary of the predecessor's area must be equal to or smaller than the upper boundary of the successor's area.
In the case of Figure 3, node 2 is the successor of node 1, and node 1 is the predecessor of node 2. According to the elimination rule, node 2's lower boundary cannot be smaller than that of node 1, and node 1's upper boundary cannot exceed node 2's upper boundary.
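In code, the elimination rule is two monotone passes over the per-node intervals along the scan direction. The sketch below is our own illustration, not the authors' implementation; it reuses the boxes and anchors layout from the basic-MSP sketch above (anchors become degenerate boxes at their known positions), and the outer loop shows how reprocessing yields iterative MSP (Section 5.2):

# Anchors enter the same machinery as degenerate boxes at known points.
for a, (x, y) in anchors.items():
    boxes[a] = [x, x, y, y]

def elimination_rule(sequence, axis):
    lo, hi = 2 * axis, 2 * axis + 1
    # Forward pass: a successor's lower boundary >= its predecessor's.
    for prev, cur in zip(sequence, sequence[1:]):
        if cur not in anchors:
            boxes[cur][lo] = max(boxes[cur][lo], boxes[prev][lo])
    # Backward pass: a predecessor's upper boundary <= its successor's.
    rev = sequence[::-1]
    for nxt, cur in zip(rev, rev[1:]):
        if cur not in anchors:
            boxes[cur][hi] = min(boxes[cur][hi], boxes[nxt][hi])

def iterative_msp(scans, iterations=5):
    # Iterative MSP: reprocessing lets earlier scans benefit from the
    # boundaries that later scans have already produced.
    for _ in range(iterations):
        for sequence, axis in scans:
            elimination_rule(sequence, axis)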
Algorithm 2 Sequence-Based MSP Process
Output: The estimated location of each node.
1: repeat
2: GetOneUnprocessedSequence();
3: repeat
4: GetOneNodeByIncreasingOrder();
5: ComputeLowbound();
6: UpdateMap();
7: until The last target node in the sequence;
8: repeat
9: GetOneNodeByDecreasingOrder();
10: ComputeUpbound();
11: UpdateMap();
12: until The last target node in the sequence;
13: until All the node sequences are processed;
14: repeat
15: GetOneUnestimatedNode();
16: CentroidEstimation();
17: until All the target nodes are estimated;
Algorithm 2 illustrates the pseudo code of sequence-based MSP. Each node sequence is processed within lines 3 to 13. The sequence processing contains two steps:
Step 1 (lines 3 to 7): Compute and modify the lower boundary of each target node in increasing order of the node sequence. Each node's lower boundary is determined by the lower boundary of its predecessor in the sequence, so processing must start from the first node of the sequence and proceed in increasing order. The map is then updated according to the new lower boundary.
Step 2 (lines 8 to 12): Compute and modify the upper boundary of each node in decreasing order of the node sequence. Each node's upper boundary is determined by the upper boundary of its successor in the sequence, so processing must start from the last node of the sequence and proceed in decreasing order. The map is then updated according to the new upper boundary.
After processing all the sequences, a polygon bounding each node's possible location has been found, and center-of-gravity-based estimation is applied to compute the exact location of each node (lines 14 to 17).
An example of this process is shown in Figure 4. The third scan generates the node sequence (B,9,2,7,4,6,3,8,C,A,5,1). In addition to the anchor split lines, because nodes 4 and 7 come after node 2 in the sequence, the polygons of nodes 4 and 7 can be narrowed according to node 2's lower boundary (the lower right-shaded area); similarly, the shaded area in node 2's rectangle can be eliminated, since this part is beyond node 7's upper boundary, indicated by the dotted line. A similar elimination can be performed for node 3, as shown in the figure.
Figure 4. Sequence-Based MSP Example
From the above, we can see that sequence-based MSP makes use of the information embedded in every sequential node pair in the node sequence. The polygon boundaries obtained for earlier target nodes can be used to further split other target nodes' areas. Our evaluation in Sections 8 and 9 shows that sequence-based MSP considerably enhances system accuracy.
5.2 Iterative MSP
Sequence-based MSP is preferable to basic MSP because it extracts more information from the node sequence. In fact, further useful information still remains! In sequence-based MSP, a sequence processed later benefits from the information produced by previously processed sequences (e.g., the third scan in Figure 5). However, the first several sequences can hardly benefit from other scans in this way. Inspired by this phenomenon, we propose iterative MSP. The basic idea of iterative MSP is to process all the sequences iteratively several times, so that the processing of each single sequence can benefit from the results of the other sequences.
To illustrate the idea more clearly, recall that Figure 4 shows the results of three scans that have provided three sequences. If we now process the sequence (8,1,5,A,6,C,4,3,7,2,B,9) obtained from scan 1 again, we can make further progress, as shown in Figure 5. Reprocessing node sequence 1 provides information in the way an additional vertical scan would. From sequence-based MSP, we know that the upper boundaries of nodes 3 and 4 along the scan direction must not extend beyond the upper boundary of node 7; therefore, the grid parts can be eliminated
Figure 5. Iterative MSP: Reprocessing Scan 1
for nodes 3 and 4, respectively, as shown in Figure 5. From this example, we can see that iterative processing of the sequences helps further shrink the polygon of each target node, and thus enhances the accuracy of the system.
The implementation of iterative MSP is straightforward: process all the sequences multiple times using sequence-based MSP. Like sequence-based MSP, iterative MSP introduces no additional event cost; in other words, reprocessing does not actually repeat the scan physically. Evaluation results in Section 8 will show that iterative MSP contributes noticeably to a lower localization error. Empirical results show that after 5 iterations, improvements become less significant. In summary, iterative processing achieves better performance with only a small computation overhead.
5.3 Distribution-Based Estimation
After determining the location area polygon of each node, estimation is needed for a final decision. Previous research has mostly applied the Center of Gravity (COG) method [4] [8] [10], which minimizes the average error. If every node were independent of all others, COG would be the statistically best solution. In MSP, however, nodes may not be independent; for example, two neighboring nodes in a certain sequence could have overlapping polygon areas. In this case, better statistical results are achieved if the marginal probability of the joint distribution is used for estimation.
Figure 6 shows an example in which node 1 and node 2 are located in the same polygon. If COG is used, both nodes are localized at the same position (Figure 6(a)). However, the node sequences obtained from two scans indicate that node 1 should be to the left of and above node 2, as shown in Figure 6(b).
Figure 6. Example of Joint Distribution Estimation: (a) center of gravity; (b) joint distribution
The high-level idea of the distribution-based estimation proposed for MSP, which we call DBE MSP, is illustrated in Figure 7. The distribution of each node under the ith scan (for the ith node sequence) is estimated in node.vmap[i], a data structure that remembers the marginal distribution over scan i. All the vmaps are then combined to obtain a single map, and weighted estimation is used to obtain the final location.
Figure 7. Idea of DBE MSP for Each Node: the per-scan vmaps are combined into a single map
For each scan, all the nodes are sorted according to the gap, which is the diameter of the polygon along the direction of the scan, to produce a second, gap-based node sequence. The estimation then starts from the node with the smallest gap, because it is statistically more accurate to assume a uniform distribution for a node with a smaller gap.
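The gap-based ordering is easy to state in code. In the minimal sketch below, polygons maps each node to its polygon vertices and direction is a unit vector along the scan; both names are our own, chosen for illustration:

def gap(poly, direction):
    # Diameter of the polygon along the scan direction.
    proj = [x * direction[0] + y * direction[1] for (x, y) in poly]
    return max(proj) - min(proj)

def gap_based_order(polygons, direction):
    # DBE MSP processes the most constrained (smallest-gap) nodes first,
    # which limits the propagation of estimation errors.
    return sorted(polygons, key=lambda n: gap(polygons[n], direction))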
For each node processed in order from the gap-based node sequence, if no neighbor node in the original event-based node sequence shares an overlapping area, or if the neighbors have not yet been processed due to bigger gaps, a uniform distribution Uniform() is applied to this isolated node (the "Alone" case in Figure 8). If the distributions of its neighbors sharing overlapped areas have been processed, we calculate the joint distribution for the node. As shown in Figure 8, there are three further cases, depending on whether the distribution of the overlapping predecessor, the successor, or both has already been estimated.
Figure 8. Four Cases in DBE Process: alone (uniform distribution); predecessor only; successor only; both (conditional distribution based on the processed neighbors' areas)
The strategy of starting the estimation from the most accurate node (the smallest-gap node) reduces the problem of estimation error propagation. The results in the evaluation section indicate that applying distribution-based estimation gives statistically better results.
5.4 Adaptive MSP
So far, all the enhancements to basic MSP focus on improving the multi-sequence processing algorithm given a fixed set of scan directions. All these enhancements require only more computing time, without any overhead on the sensor nodes. Obviously, it is also possible to optimize how the events themselves are generated. For example, in military situations, artillery or rocket-launched mini-ultrasound bombs can be used for event generation at selected locations. In adaptive MSP, we carefully generate each new localization event so as to maximize its contribution to the refinement of localization, based on feedback from previous events.
Figure 9 depicts the basic architecture of adaptive MSP. Through previous localization events, the whole map has been partitioned into many small location areas. The idea of adaptive MSP is to generate the next localization event to achieve best-effort elimination, which ideally shrinks the location area of each individual node as much as possible.
Figure 9. Basic Architecture of Adaptive MSP: the map partitioned by previous localization events and the diameter of each area feed the evaluation of candidate localization events, which triggers the next localization event
We use a weighted voting mechanism to evaluate candidate localization events. Every node wants the next event to split its area evenly, which would shrink the area quickly. Therefore, every node votes for the parameters of the next event (e.g., the scan angle k of a straight-line scan). Since the area map is maintained centrally, the vote is virtual, and the real sensor nodes need not participate in it. After gathering all the voting results, the event parameters with the most votes win the election. Two factors determine the weight of each vote:
• The vote for each candidate event is weighted according to the diameter D of the node's location area. Nodes with bigger location areas speak louder in the voting, because the overall system error is reduced mostly by splitting the larger areas.
• The vote for each candidate event is also weighted according to its elimination efficiency for a location area, defined as how equally in size (or in diameter) the event can cut the area.
In other words, an optimal scan event cuts an area in the middle, since this cut shrinks the area quickly and thus reduces localization uncertainty quickly.
Combining the above two aspects, the weight of each vote is computed according to the following equation (1):

$Weight(k_i^j) = f\big(D_i,\, \triangle(k_i^j, k_i^{opt})\big)$   (1)

Here $k_i^j$ is node i's jth supported parameter for the next event generation; $D_i$ is the diameter of node i's location area; and $\triangle(k_i^j, k_i^{opt})$ is the distance between $k_i^j$ and the optimal parameter $k_i^{opt}$ for node i, which should be defined to fit the specific application.
Figure 10 presents an example of voting for the slope of the next straight-line scan. In the system, there is a fixed number of candidate slopes for each scan (e.g., $k^1, k^2, k^3, k^4, \dots$). The location area of target node 3 is shown in the figure. The candidate events $k_3^1, k_3^2, \dots, k_3^6$ are evaluated according to their effectiveness compared to the optimal ideal event, which is shown as a dotted line, with weights computed according to equation (1). For this specific example, as illustrated in the right part of Figure 10, $f(D_i, \triangle(k_i^j, k_i^{opt}))$ is defined as the following equation (2):

$Weight(k_i^j) = f\big(D_i,\, \triangle(k_i^j, k_i^{opt})\big) = D_i \cdot \frac{S_{small}}{S_{large}}$   (2)

$S_{small}$ and $S_{large}$ are the sizes of the smaller and larger parts, respectively, of the area cut by the candidate line. In this case, node 3 votes 0 for the candidate lines that do not cross its area, since $S_{small} = 0$.
Figure 10. Candidate Slopes for Node 3 at Anchor 1
We show later that adaptive MSP improves localization accuracy in WSNs with irregularly shaped deployment areas.
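Equation (2) translates directly into a voting loop. The following sketch is our own illustration: split_area is a stand-in for a geometry helper returning the two part sizes produced by cutting a node's area with candidate slope k, and diameter returns D_i; neither name comes from the paper.

def elect_next_event(nodes, candidate_slopes, split_area, diameter):
    """Adaptive MSP voting: every node weights every candidate next scan."""
    tally = {k: 0.0 for k in candidate_slopes}
    for node in nodes:
        for k in candidate_slopes:
            s_small, s_large = split_area(node, k)
            if s_large > 0:
                # Weight(k_i^j) = D_i * S_small / S_large   (equation 2)
                tally[k] += diameter(node) * s_small / s_large
    # The slope with the most (weighted) votes triggers the next event.
    return max(tally, key=tally.get)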
5.5 Overhead and MSP Complexity Analysis
This section provides a complexity analysis of the MSP design. We emphasize that MSP adopts an asymmetric design in which sensor nodes need only detect and report the events; they are blissfully oblivious to the processing methods proposed in the previous sections. Here we analyze the computational cost on the node sequence processing side, where resources are plentiful.
According to Algorithm 1, the computational complexity of basic MSP is O(d·N·S), and the storage space required is O(N·S), where d is the number of events, N is the number of target nodes, and S is the area size.
According to Algorithm 2, the computational complexity of both sequence-based MSP and iterative MSP is O(c·d·N·S), where c is the number of iterations (c = 1 for sequence-based MSP), and the storage space required is O(N·S). Both the computational complexity and the storage space are equal, within a constant factor, to those of basic MSP.
The computational complexity of distribution-based estimation (DBE MSP) is greater. The major overhead comes from the computation of joint distributions when both predecessor and successor nodes exist. In order to compute the marginal probability, MSP needs to enumerate the locations of the predecessor and successor nodes. For example, if node A has predecessor node B and successor node C, then the marginal probability $P_A(x,y)$ of node A being at location (x,y) is:

$P_A(x,y) = \sum_i \sum_j \sum_m \sum_n \frac{1}{N_{B,A,C}} \cdot P_B(i,j) \cdot P_C(m,n)$   (3)

$N_{B,A,C}$ is the number of valid locations for A satisfying the sequence (B, A, C) when B is at (i,j) and C is at (m,n); $P_B(i,j)$ is the probability of node B being located at (i,j); and $P_C(m,n)$ is the probability of node C being located at (m,n). A naive algorithm for computing equation (3) has complexity O(d·N·S^3). However, since the marginal probability in fact comes from only one dimension along the scanning direction (e.g., a line), the complexity can be reduced to O(d·N·S^1.5) after algorithm optimization. In addition, the final location areas of the nodes are much smaller than the original field S; therefore, in practice, DBE MSP can be computed much faster than O(d·N·S^1.5).
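For the case where both neighbors exist, equation (3) can be written down almost verbatim. The quadruple enumeration below is the naive version mentioned above, shown only to make the marginalization concrete; valid is our stand-in predicate for whether three positions are consistent with the order (B, A, C) along the scan direction.

def marginal(locs_A, P_B, P_C, valid):
    """P_A(x,y) = sum over (i,j),(m,n) of P_B(i,j)*P_C(m,n)/N_{B,A,C} (eq. 3)."""
    P_A = {a: 0.0 for a in locs_A}
    for b, pb in P_B.items():              # b = (i, j), pb = P_B(i, j)
        for c, pc in P_C.items():          # c = (m, n), pc = P_C(m, n)
            ok = [a for a in locs_A if valid(b, a, c)]
            if ok:                         # len(ok) is N_{B,A,C}
                share = pb * pc / len(ok)
                for a in ok:
                    P_A[a] += share
    return P_A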
6 Wave Propagation Example
So far, MSP has been described solely in the context of straight-line scans. However, MSP is conceptually independent of how the event propagates, as long as node sequences can be obtained. Clearly, we can also support wave-propagation-based events (e.g., ultrasound propagation, air blast propagation), which are the polar-coordinate equivalents of line scans in the Cartesian coordinate system. This section illustrates the effects of implementing MSP in the wave propagation situation. For ease of modeling, we have made the following assumptions:
• The wave propagates uniformly in all directions, so the propagation has a circular frontier surface. Since MSP does not rely on an accurate space-time relationship, a certain distortion in wave propagation is tolerable. If a directional wave is used, the propagation frontier surface can be modified accordingly.
• Under line-of-sight conditions, we allow obstacles to reflect or deflect the wave. Reflection and deflection are not problems, because each node reacts only to the first detected event, and reflected or deflected waves arrive later than the line-of-sight waves. The only thing the system needs to maintain is an appropriate time interval between two successive localization events.
• We assume that background noise exists, and we therefore run a band-pass filter to listen for a particular wave frequency. This reduces the chance of false detection.
Figure 11. Example of Wave Propagation Situation
The parameter that affects localization event generation here is the source location of the event: the different distances between each node and the event source determine the rank of each node in the node sequence. Using the node sequences, the MSP algorithm divides the whole area into many non-rectangular areas, as shown in Figure 11. In this figure, the stars represent two previous event sources. The two previous propagations split the whole map into many areas bounded by the dashed circles that pass through one of the anchors, and each node is located in one of the small areas.
Since sequence-based MSP, iterative MSP and DBE MSP make no assumptions about the type of localization event or the shape of the areas, all three optimization algorithms can be applied in the wave propagation scenario. Adaptive MSP, however, needs more explanation. Figure 11 illustrates an example of nodes voting for the next event source location. Unlike the straight-line scan, the critical parameter now is the location of the event source, because the distance between each node and the event source determines the rank of the node in the sequence. In Figure 11, if the next event breaks out along or near the solid thick gray line, which perpendicularly bisects the solid dark line between anchor C and the center of gravity of node 9's area (the gray area), the wave would reach anchor C and the center of gravity of node 9's area at roughly the same time, and would thus divide node 9's area relatively equally. Therefore, node 9 prefers to vote for positions around the thick gray line.
7 Practical Deployment Issues
For the sake of presentation, we have so far described MSP in an ideal case where a complete node sequence can be obtained with accurate time synchronization. In this section, we describe how to make MSP work well under more realistic conditions.
7.1 Incomplete Node Sequence
For diverse reasons, such as sensor malfunction or natural obstacles, some nodes in the network may fail to detect localization events. In such cases, the node sequence will not be complete. This problem has two versions:
• Anchor nodes are missing from the node sequence. If some anchor nodes fail to respond to the localization events, the system effectively has fewer anchors. In this case, the solution is to generate more events to compensate for the loss of anchors, so as to achieve the desired accuracy requirements.
• Target nodes are missing from the node sequence. There are two consequences when target nodes are missing. First, if these nodes are still useful to sensing applications, they need to use backup localization approaches (e.g., Centroid) to localize themselves with help from neighbors that have already learned their own locations from MSP. Second, since in advanced MSP each node in a sequence may contribute to the overall system accuracy, dropping target nodes from sequences could also reduce the accuracy of the localization; thus, proper compensation procedures, such as adding more localization events, need to be launched.
7.2 Localization without Time Synchronization
In a sensor network without time synchronization support, nodes cannot be ordered into a sequence using timestamps. For such cases, we propose a listen-detect-assemble-report protocol, which is able to function independently of time synchronization.
listen-detect-assemble-report requires that every node listen to the channel for the node sequences transmitted by its neighbors. When the node detects the localization event, it assembles itself into the newest node sequence it has heard and reports the updated sequence to other nodes. Figure 12(a) illustrates an example of the listen-detect-assemble-report protocol. For simplicity, this figure does not differentiate target nodes from anchor nodes; a solid line between two nodes stands for a communication link. Suppose a straight line scans from left to right. Node 1 detects the event first and broadcasts the sequence (1) into the network.
Node 2 and node 3 receive this sequence. When node 2 detects the event, it adds itself to the sequence and broadcasts (1, 2). The sequence propagates in the same direction as the scan, as shown in Figure 12(a). Finally, node 6 obtains a complete sequence (1,2,3,5,7,4,6).
Figure 12. Node Sequences without Time Synchronization: (a) a complete sequence assembled hop by hop; (b) multiple partial sequences; (c) a sequence flip between nearby nodes
In the case of ultrasound propagation, because the event propagation speed is much slower than that of radio, the listen-detect-assemble-report protocol works well when the node density is not very high. For instance, if the distance between two nodes along one direction is 10 meters, sound traveling at 340 m/s needs 29.4 ms to propagate from one node to the other, while at the typical WSN communication data rate of 250 kbps (e.g., CC2420 [1]), it takes only about 2-3 ms to transmit an assembled packet over one hop.
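The per-node logic of listen-detect-assemble-report fits in a few lines. The sketch below abstracts the radio as a broadcast callback and keeps only the newest (longest) overheard sequence; these names are our placeholders, not any particular WSN API:

class Node:
    def __init__(self, my_id, broadcast):
        self.my_id = my_id
        self.broadcast = broadcast   # placeholder for the radio send primitive
        self.latest = []             # newest (longest) sequence heard so far

    def on_receive(self, sequence):
        # Listen: remember the newest sequence overheard from a neighbor.
        if len(sequence) > len(self.latest):
            self.latest = list(sequence)

    def on_detect(self):
        # Detect + assemble + report: append ourselves and rebroadcast.
        if self.my_id not in self.latest:
            self.latest.append(self.my_id)
        self.broadcast(self.latest)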
One problem that may occur with the listen-detect-assemble-report protocol is multiple partial sequences, as shown in Figure 12(b). Two separate paths in the network may result in two sequences that cannot be further combined. In this case, the two sequences can only be processed as separate sequences, so some order information is lost and the accuracy of the system decreases.
The other problem is the sequence flip problem. As shown in Figure 12(c), because node 2 and node 3 are too close to each other along the scan direction, they detect the scan almost simultaneously. Due to uncertainty such as media access delay, the two messages could be transmitted out of order. For example, if node 3 sends out its report first, then the order of node 2 and node 3 is flipped in the final node sequence. The sequence flip problem can appear even in an accurately synchronized system, due to random jitter in node detection when an event arrives at multiple nodes almost simultaneously. A method addressing sequence flip is presented in the next section.
7.3 Sequence Flip and Protection Band
Sequence flip problems can be solved with and without time synchronization. We first consider a scenario with time synchronization. Existing solutions for time synchronization [12, 6] can easily achieve sub-millisecond accuracy; for example, FTSP [12] achieves an average error of 16.9 microseconds for a two-node single-hop case. Therefore, we can comfortably assume that the network is synchronized with a maximum error of 1000 microseconds. However, when multiple nodes are located very near to each other along the event propagation direction, sequence flips may still occur even with sub-millisecond synchronization. For example, in the sound wave propagation case, if two nodes are less than 0.34 meters apart, the difference between their detection timestamps is smaller than 1 millisecond.
We find that a sequence flip not only damages system accuracy but can also cause a fatal error in the MSP algorithm. Figure 13 illustrates both detrimental results. In the left side of Figure 13(a), suppose node 1 and node 2 are so close to each other that it takes less than 0.5 ms for the localization event to propagate from node 1 to node 2. Now, unfortunately, the node sequence is mistaken to be (2,1), so node 1 is expected to be located to the right of node 2, as at the position of the dashed node 1. According to the elimination rule in sequence-based MSP, the left part of node 1's area is cut off, as shown in the right part of Figure 13(a). This is a potentially fatal error, because node 1 is actually located in the dashed area that has been eliminated by mistake. During subsequent eliminations introduced by other events, node 1's area might be cut off completely, and node 1 could consequently be erased from the map! Even in cases where node 1 survives, its area no longer covers its real location.
Figure 13. Sequence Flip and Protection Band: (a) a flipped sequence causes a fatal elimination error; (b) the protection band B makes the elimination safe
The other consequence is not fatal but lowers the localization accuracy. With the correct node sequence (1,2), node 1 would gain a new upper boundary that narrows its area, as in Figure 3; due to the sequence flip, node 1 loses this new upper boundary.
In order to address the sequence flip problem, and especially to prevent nodes from being erased from the map, we propose a protection band compensation approach. The basic idea of the protection band is to extend the boundary of the location area a little, so as to make sure that the node can never be erased from the map. This solution is based on the fact that nodes have a high probability of flipping in the sequence only if they are near each other along the event propagation direction. If two nodes are more than some distance B apart, they rarely flip unless the nodes are faulty. The width of the protection band B is largely determined by the maximum error in system time synchronization and the propagation speed of the localization event.
Figure 13(b) presents the application of the protection band. Instead of eliminating the dashed part of node 1's area as in Figure 13(a), the new lower boundary of node 1 is set by shifting the original lower boundary of node 2 to the left by the distance B. In this case, the location area still covers node 1 and protects it from being erased. In a practical implementation using ultrasound events, if the maximum error of system time synchronization is 1 ms, two nodes may flip with high probability if the timestamp difference between them is smaller than or equal to 1 ms. Accordingly, we set the protection band B to 0.34 m (the distance sound propagates within 1 millisecond). By adding the protection band, we reduce the chance of fatal errors, although at some cost in localization accuracy. Empirical results obtained from our physical test-bed verified this conclusion.
When the listen-detect-assemble-report protocol is used, the only change needed is to select the protection band according to the maximum delay uncertainty introduced by the MAC operation and the event propagation speed. To bound the MAC delay at the node side, a node can drop its report message if it experiences excessive MAC delay.
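Applying the protection band is a one-line change to the boundary borrowed from a neighbor: retreat it by B before clamping. A minimal sketch, assuming the ultrasound numbers used above:

SOUND_SPEED = 340.0                # m/s, event propagation speed
MAX_SYNC_ERROR = 0.001             # s, maximum time-synchronization error
B = SOUND_SPEED * MAX_SYNC_ERROR   # protection band: 0.34 m

def protected_lower_bound(pred_lower, cur_lower):
    # Without protection: new lower bound = max(cur_lower, pred_lower).
    # With protection: shift the predecessor's boundary back by B, so a
    # flipped neighbor can never eliminate the node's true location.
    return max(cur_lower, pred_lower - B)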
This converts the sequence flip problem into the incomplete sequence problem, which can be more easily addressed by the method proposed in Section 7.1.
8 Simulation Evaluation
Our evaluation of MSP was conducted on three platforms: (i) an indoor system with 46 MICAz motes using straight-line scans, (ii) an outdoor system with 20 MICAz motes using sound wave propagation, and (iii) an extensive simulation under various kinds of physical settings.
In order to understand the behavior of MSP under numerous settings, we start our evaluation with simulations. We then implemented basic MSP and all the advanced MSP methods for the case where time synchronization is available in the network. The simulation and implementation details are omitted in this paper due to space constraints, but related documents [25] are available online at http://www.cs.umn.edu/\u223czhong/MSP. Full implementation and evaluation of the system without time synchronization are to be completed in the near future.
In the simulations, we assume all the node sequences are perfect, so as to reveal the performance of MSP achievable in the absence of incomplete node sequences or sequence flips. All the anchor nodes and target nodes are assumed to be deployed uniformly. The mean and maximum errors are averaged over 50 runs to obtain high confidence. For legibility, we do not plot confidence intervals in this paper. All the simulations are based on the straight-line scan example. We implement three scan strategies:
• Random Scan: The slope of the scan line is chosen randomly each time.
• Regular Scan: The slope is predetermined to rotate uniformly from 0 degrees to 180 degrees. For example, if the system scans 6 times, the scan angles are 0, 30, 60, 90, 120, and 150 degrees.
• Adaptive Scan: The slope of each scan is determined based on the localization results of previous scans.
We start with basic MSP and then demonstrate the performance improvements one step at a time by adding (i) sequence-based MSP, (ii) iterative MSP, (iii) DBE MSP and (iv) adaptive MSP.
8.1 Performance of Basic MSP
The evaluation starts with basic MSP, where we compare the performance of random scan and regular scan under different configurations. We intend to illustrate the impact of the number of anchors M, the number of scans d, and the target node density (the number of target nodes N in a fixed-size region) on the localization error. Table 1 shows the default simulation parameters. The error of each node is defined as the distance between the estimated location and the real position. We note that by default we use only three anchors, which is considerably fewer than in existing range-free solutions [8, 4].

Table 1. Default Configuration Parameters
Parameter             Value
Field Area            200 x 200 (grid units)
Scan Type             Regular (default) / Random
Anchor Number         3 (default)
Scan Times            6 (default)
Target Node Number    100 (default)
Statistics            Mean / Max Error
Random Seeds          50 runs

Impact of the Number of Scans: In this experiment, we compare regular scan with random scan under different numbers of scans, from 3 to 30 in steps of 3. The number of anchors is 3 by default.
Figure 14. Evaluation of Basic MSP under Random and Regular Scans: (a) error vs. number of scans; (b) error vs. number of anchors; (c) error vs. number of target nodes
Figure 14(a) indicates the following: (i) as the number of scans increases, the localization error decreases significantly; for example, localization errors drop by more than 60% from 3 scans to 30 scans; and (ii) statistically, regular scan achieves better performance than random scan under an identical number of scans, although the performance gap shrinks as the number of scans increases. This is expected, since a large set of random slopes converges to a uniform distribution. Figure 14(a) also demonstrates that MSP requires only a small number of anchors to perform very well, compared with existing range-free solutions [8, 4].
Impact of the Number of Anchors: In this experiment, we compare regular scan with random scan under different numbers of anchors, from 3 to 30 in steps of 3. The results shown in Figure 14(b) indicate that (i) as the number of anchor nodes increases, the localization error decreases, and (ii) statistically, regular scan obtains better results than random scan with an identical number of anchors. By combining Figures 14(a) and 14(b), we can conclude that MSP allows a flexible tradeoff between physical cost (anchor nodes) and soft cost (localization events).
Impact of the Target Node Density: In this experiment, we confirm that the density of target nodes has no impact on the accuracy of basic MSP, which motivated the design of sequence-based MSP. We compare regular scan with random scan under different numbers of target nodes, from 10 to 190 in steps of 20. The results in Figure 14(c) show that mean localization errors remain constant across different node densities, although the average maximum error increases with the number of target nodes.
Summary: From the above experiments, we can conclude that in basic MSP, regular scans are better than random scans under different numbers of anchors and scan events.
8.2 Improvements of Sequence-Based MSP
This section evaluates the benefits of exploiting the order information among target nodes by comparing sequence-based MSP with basic MSP. In this and the following sections, regular scan is used for straight-line scan event generation. The purpose of using regular scan is to keep the scan events and the node sequences identical for both sequence-based MSP and basic MSP, so that the only difference between them is the sequence processing procedure.
Impact of the Number of Scans: In this experiment, we compare sequence-based MSP with basic MSP under different numbers of scans, from 3 to 30 in steps of 3. Figure 15(a) indicates a significant performance improvement of sequence-based MSP over basic MSP across all scan settings, especially when the number of scans is large. For example, at 30 scans, the errors of sequence-based MSP are about 20% of those of basic MSP. We conclude that sequence-based MSP performs extremely well when there are many scan events.
Impact of the Number of Anchors: In this experiment, we use different numbers of anchors, from 3 to 30 in steps of 3. As seen in Figure 15(b), the mean and maximum errors of sequence-based MSP are much smaller than those of basic MSP. Especially when there is a limited number of anchors in the system (e.g., 3 anchors), the error is almost halved by using sequence-based MSP. This phenomenon has an interesting explanation: the cutting lines created by anchor nodes are exploited by both basic MSP and sequence-based MSP, so as the number of anchor nodes increases, anchors tend to dominate the contribution and the performance gap lessens.
Impact of the Target Node Density: Figure 15(c) demonstrates the benefits of exploiting order information among target nodes. Since sequence-based MSP makes use of the information among the target nodes, having more target nodes contributes to the overall system accuracy. As the number of target nodes increases, the mean and maximum errors of sequence-based MSP decrease, while the mean error of basic MSP is not affected by the number of target nodes, as shown in Figure 15(c).
Summary: From the above experiments, we can conclude that exploiting order information among target nodes can improve accuracy significantly, especially when the number of events is large but only a few anchors are available.
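To give intuition for the kind of constraint that order information provides, the following simplified sketch (our illustration under idealized assumptions, not the paper's actual estimator) computes, for one straight-line scan, the interval in which each node's projection onto the scan direction must lie, given that scan's node sequence:

import math

def projection_bounds(sequence, positions, theta_deg):
    # A scan line with slope theta sweeps the field, so nodes are detected
    # in increasing order of their projection onto the scan's moving
    # direction (the unit vector perpendicular to the line).  The sequence
    # therefore bounds each node's projection by those of its sequence
    # neighbors; anchors in the sequence pin some projections exactly.
    theta = math.radians(theta_deg)
    ux, uy = -math.sin(theta), math.cos(theta)
    proj = {n: positions[n][0] * ux + positions[n][1] * uy for n in sequence}
    bounds = {}
    for i, n in enumerate(sequence):
        lo = proj[sequence[i - 1]] if i > 0 else float("-inf")
        hi = proj[sequence[i + 1]] if i < len(sequence) - 1 else float("inf")
        bounds[n] = (lo, hi)
    return bounds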
8.3 Iterative MSP over Sequence-Based MSP
In this experiment, the same node sequences were processed iteratively multiple times. [Figure 16. Improvements of Iterative MSP: mean and max errors vs. the number of iterations, compared against basic MSP.] In Figure 16, the two single marks are results from basic MSP, since basic MSP does not perform iterations. The two curves present the performance of iterative MSP under different numbers of iterations c. We note that when only a single iteration is used, this method degrades to sequence-based MSP; Figure 16 therefore compares all three methods to one another.
Figure 16 shows that the second iteration reduces the mean and maximum errors dramatically. After that, the performance gain gradually diminishes, especially when c > 5. This is because the second iteration allows earlier scans to exploit the new boundaries created by later scans in the first iteration, and such exploitation decays quickly over subsequent iterations.
8.4 DBE MSP over Iterative MSP
[Figure 17. Improvements of DBE MSP: CDFs of the mean and max errors for DBE MSP and non-DBE MSP.] Figure 17, in which we augment iterative MSP with distribution-based estimation (DBE MSP), shows that DBE MSP brings statistically better performance. Figure 17 presents the cumulative distribution of localization errors. In general, the two curves of DBE MSP lie slightly to the left of those of non-DBE MSP, which indicates that DBE MSP has a smaller statistical mean error and average maximum error than non-DBE MSP. We note that because DBE is applied on top of the best solution so far, the performance improvement is not significant; when we apply DBE on basic MSP, the improvement is much more significant. We omit these results because of space constraints.
8.5 Improvements of Adaptive MSP
This section illustrates the performance of adaptive MSP over non-adaptive MSP. We note that feedback-based adaptation can be applied to all MSP methods, since it affects only the scanning angles, not the sequence processing. In this experiment, we evaluated how adaptive MSP improves the best solution so far. The default angle granularity (step) for adaptive searching is 5 degrees.
[Figure 18. The Improvements of Adaptive MSP: (a) adaptive vs. regular scan in a 500 by 80 field; (b) mean error CDFs at different angle steps in adaptive scan.]
Impact of Area Shape: First, if the system settings are regular, the adaptive method hardly contributes to the results. For a square (regular) area, the performance of adaptive MSP and regular scan is very close. However, if the shape of the area is irregular, adaptive MSP helps to choose appropriate localization events to compensate; it can therefore achieve better mean and maximum errors, as shown in Figure 18(a). For example, adaptive MSP improves localization accuracy by 30% when the number of target nodes is 10.
Impact of the Target Node Density: Figure 18(a) also shows that adaptive MSP brings more benefit when the node density is low than when it is high. This makes statistical sense: by the law of large numbers, node placement approaches a truly uniform distribution as the number of nodes increases, so adaptive MSP has an edge when the layout is not uniform.
Impact of Candidate Angle Density: Figure 18(b) shows that the smaller the candidate scan angle step, the better the statistical performance in terms of mean error. The rationale is clear: a denser set of candidate scan angles gives adaptive MSP more opportunity to choose an angle approaching the optimal one.
8.6 Simulation Summary
Starting from basic MSP, we have demonstrated step-by-step how four optimizations can be applied on top of each other to improve localization performance. In other words, these optimizations are compatible with each other and can jointly improve the overall performance. We note that our simulations were done under the assumption that the complete node sequence can be obtained without sequence flips. In the next section, we present two real-system implementations that reveal and address these practical issues.
9 System Evaluation
In this section, we present a system implementation of MSP on two physical test-beds. The first one is called Mirage, a large indoor test-bed composed of six 4-foot by 8-foot boards, illustrated in Figure 19. Each board in the system can be used as an individual sub-system, which is powered, controlled and metered separately. Three Hitachi CP-X1250 projectors, connected through a Matrox TripleHead2Go graphics expansion box, are used to create an ultra-wide integrated display on the six boards. Figure 19 shows a long tilted line generated by the projectors. We have implemented all five versions of MSP on the Mirage test-bed, running 46 MICAz motes. Unless mentioned otherwise, the default setting is 3 anchors and 6 scans at a scanning line speed of 8.6 feet/s. In all of our graphs, each data point represents the average value of 50 trials.
[Figure 19. The Mirage Test-bed (Line Scan)]
[Figure 20. The 20-node Outdoor Experiments (Wave)]
In the outdoor system, a Dell A525 speaker is used to generate a 4.7 KHz sound, as shown in Figure 20. We place 20 MICAz motes in the backyard of a house. Since the location is not completely open, sound waves are reflected, scattered and absorbed by various objects in the vicinity, causing a multi-path effect. In the system evaluation, simple time synchronization mechanisms are applied on each node.
9.1 Indoor System Evaluation
During the indoor experiments, we encountered several real-world problems that are not revealed in simulation. First, the sequences obtained were partial, due to misdetection and message losses. Second, elements in the sequences could flip due to detection delay, uncertainty in media access, or errors in time synchronization. We show that these issues can be addressed by using the protection band method described in Section 7.3.
9.1.1 On Scanning Speed and Protection Band
In this experiment, we studied the impact of the scanning speed and the length of the protection band on the performance of the system. In general, with increasing scanning speed, nodes have less time to respond to the event and the time gap between two adjacent nodes shrinks, leading to an increasing number of partial sequences and sequence flips.
Figure 21 shows the node flip situations for six scans with distinct angles under different scan speeds. The x-axis shows the distance between the flipped nodes in the correct node sequence, and the y-axis shows the total number of flips in the six scans.
This figure tells us that faster scans bring in not only more flips, but also longer-distance flips, which require a wider protection band to prevent fatal errors.
Figure 22(a) shows the effectiveness of the protection band in terms of reducing the number of unlocalized nodes. When we use a moderate scan speed (4.3 feet/s), flipping is rare, so we can achieve 0.45 feet mean accuracy (Figure 22(b)) with a 1.6 feet maximum error (Figure 22(c)). With increasing speeds, the protection band needs to be set to a larger value to deal with flipping. An interesting phenomenon can be observed in Figure 22: on one hand, the protection band sharply reduces the number of unlocalized nodes; on the other hand, protection bands enlarge the area in which a target could potentially reside, introducing more uncertainty. Thus there is a concave curve for both the mean and maximum errors when the scan speed is 8.6 feet/s.
9.1.2 On MSP Methods and Protection Band
In this experiment, we show the improvements resulting from three different methods. Figure 23(a) shows that a protection band of 0.35 feet is sufficient for a scan speed of 8.57 feet/s. Figures 23(b) and 23(c) show clearly that iterative MSP (with adaptation) achieves the best performance. For example, Figure 23(b) shows that when we set the protection band at 0.05 feet, iterative MSP achieves 0.7 feet accuracy, which is 42% more accurate than the basic design. Similarly, Figures 23(b) and 23(c) show the double-edged effect of the protection band on localization accuracy.
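In interval terms, the protection band itself is a one-line change: each node's feasible interval along the scan direction is widened by the band B on both sides, tolerating flips between nearby nodes at the price of a larger candidate area (the double-edged effect above). A hedged sketch, assuming the interval representation from the illustration in Section 8:

def widen_with_protection_band(bounds, band):
    # Widening by B on both sides keeps a flipped node inside its
    # (enlarged) feasible interval, but also enlarges the area in which
    # the node may reside, adding uncertainty.
    return {n: (lo - band, hi + band) for n, (lo, hi) in bounds.items()}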
[Figure 21. Number of Flips for Different Scan Speeds: flip distance distributions for 6 scans at line speeds of 4.3, 8.6 and 14.6 feet/s.]
[Figure 22. Impact of Protection Band and Scanning Speed: (a) number of unlocalized nodes, (b) mean error, (c) max error, each vs. protection band (in feet) at scan speeds of 4.3, 8.6 and 14.6 feet/s.]
[Figure 23. Impact of Protection Band under Different MSP Methods (scan speed 8.57 feet/s): (a) number of unlocalized nodes out of 46, (b) mean error, (c) max error vs. protection band for basic, sequence-based and iterative MSP.]
[Figure 24. Impact of the Number of Anchors and Scans (protection band 0.35 feet): (a) number of unlocalized nodes, (b) mean error, (c) max error vs. anchor number for 4, 6 and 8 scan events at speed 8.75 feet/s.]
9.1.3 On Number of Anchors and Scans
In this experiment, we show the tradeoff between hardware cost (anchors) and soft cost (events). Figure 24(a) shows that with more cutting lines created by anchors, the chance of unlocalized nodes increases slightly. We note that with a 0.35-foot protection band, the percentage of unlocalized nodes is very small; e.g., in the worst case with 11 anchors, only 2 out of 46 nodes are not localized, due to flipping. Figures 24(b) and 24(c) show the tradeoff between the number of anchors and the number of scans. As the number of anchors increases, the error drops significantly: with 11 anchors we can achieve a localization accuracy as low as 0.25 ~ 0.35 feet, nearly a 60% improvement. Similarly, as the number of scans increases, the error drops significantly as well: we observe an improvement of about 30% across all anchor settings when we increase the number of scans from 4 to 8. For example, with only 3 anchors, we can achieve 0.6-foot accuracy with 8 scans.
9.2 Outdoor System Evaluation
The outdoor system evaluation contains two parts: (i) an effective detection distance evaluation, which shows that the node sequence can be readily obtained, and (ii) sound propagation based localization, which shows the results of wave-propagation-based localization.
9.2.1 Effective Detection Distance Evaluation
We first evaluate the sequence flip phenomenon in wave propagation. As shown in Figure 25, 20 motes were placed in five groups in front of the speaker, four nodes in each group at roughly the same distance to the speaker.
The gap between consecutive groups is set to 2, 3, 4 and 5 feet, respectively, in four experiments. Figure 26 shows the results. The x-axis in each subgraph indicates the group index; there are four nodes (4 bars) in each group. The y-axis shows the detection rank (order) of each node in the node sequence.
[Figure 25. Wave Detection]
[Figure 26. Ranks vs. Distances: detection ranks of the five groups at group distances of 2, 3, 4 and 5 feet.]
[Figure 27. Localization Error (Sound): localized nodes and anchors plotted in the 14 by 24 feet field.]
As the distance between groups increases, the number of flips in the resulting node sequence decreases. For example, in the 2-foot-distance subgraph there are quite a few flips between nodes in adjacent and even non-adjacent groups, while in the 5-foot subgraph flips between different groups disappeared in the test.
9.2.2 Sound Propagation Based Localization
As shown in Figure 20, 20 motes are placed as a grid with 5 rows, 5 feet between rows, and 4 columns, 4 feet between columns. Six 4 KHz acoustic wave propagation events are generated around the mote grid by a speaker. Figure 27 shows the localization results using iterative MSP (3 iterations) with a protection band of 3 feet. The average error of the localization results is 3 feet and the maximum error is 5 feet, with one unlocalized node.
We found that sequence flips in wave propagation are more severe than in the indoor, line-based tests. This is expected, due to the high propagation speed of sound. Currently we use MICAz motes, which are equipped with a low-quality microphone. We believe that with a better speaker and more events, the system could yield better accuracy. Despite the hardware constraints, the MSP algorithm still successfully localized most of the nodes with good accuracy.
10 Conclusions
In this paper, we present the first work that exploits the concept of node sequence processing to localize sensor nodes. We demonstrated that we can significantly improve localization accuracy by making full use of the information embedded in multiple easy-to-get one-dimensional node sequences. We proposed four novel optimization methods, exploiting order and marginal distribution among non-anchor nodes as well as the feedback information from early localization results. Importantly, these optimization methods can be used together and improve accuracy additively. The practical issues of partial node sequences and sequence flips were identified and addressed on two physical system test-beds. We also evaluated performance at scale through analysis as well as extensive simulations. The results demonstrate that, requiring neither costly hardware on sensor nodes nor precise event distribution, MSP can achieve sub-foot accuracy with very few anchor nodes, provided sufficient events.
11 References
[1] CC2420 Data Sheet. Available at http://www.chipcon.com/.
[2] P. Bahl and V. N. Padmanabhan. RADAR: An In-Building RF-Based User Location and Tracking System. In IEEE INFOCOM '00.
[3] M. Broxton, J. Lifton, and J. Paradiso. Localizing a Sensor Network via Collaborative Processing of Global Stimuli. In EWSN '05.
[4] N. Bulusu, J. Heidemann, and D. Estrin. GPS-Less Low Cost Outdoor Localization for Very Small Devices. IEEE Personal Communications Magazine, 7(4), 2000.
[5] D. Culler, D. Estrin, and M. Srivastava. Overview of Sensor Networks. IEEE Computer Magazine, 2004.
[6] J. Elson, L. Girod, and D. Estrin. Fine-Grained Network Time Synchronization Using Reference Broadcasts. In OSDI '02.
[7] D. K. Goldenberg, P. Bihler, M. Gao, J. Fang, B. D. Anderson, A. Morse, and Y. Yang. Localization in Sparse Networks Using Sweeps. In MobiCom '06.
[8] T. He, C. Huang, B. M. Blum, J. A. Stankovic, and T. Abdelzaher. Range-Free Localization Schemes in Large-Scale Sensor Networks. In MobiCom '03.
[9] B. Kusy, P. Dutta, P. Levis, M. Mar, A. Ledeczi, and D. Culler. Elapsed Time on Arrival: A Simple and Versatile Primitive for Canonical Time Synchronization Services. International Journal of Ad Hoc and Ubiquitous Computing, 2(1), 2006.
[10] L. Lazos and R. Poovendran. SeRLoc: Secure Range-Independent Localization for Wireless Sensor Networks. In WiSe '04.
[11] M. Maroti, B. Kusy, G. Balogh, P. Volgyesi, A. Nadas, K. Molnar, S. Dora, and A. Ledeczi. Radio Interferometric Geolocation. In SenSys '05.
[12] M. Maroti, B. Kusy, G. Simon, and A. Ledeczi. The Flooding Time Synchronization Protocol. In SenSys '04.
[13] D. Moore, J. Leonard, D. Rus, and S. Teller. Robust Distributed Network Localization with Noisy Range Measurements. In SenSys '04.
[14] R. Nagpal and D. Coore. An Algorithm for Group Formation in an Amorphous Computer. In PDCS '98.
[15] D. Niculescu and B. Nath. Ad-Hoc Positioning System. In GlobeCom '01.
[16] D. Niculescu and B. Nath. Ad-Hoc Positioning System (APS) Using AOA. In INFOCOM '03.
[17] N. B. Priyantha, A. Chakraborty, and H. Balakrishnan. The Cricket Location-Support System. In MobiCom '00.
[18] K. Römer. The Lighthouse Location System for Smart Dust. In MobiSys '03.
[19] A. Savvides, C. C. Han, and M. B. Srivastava. Dynamic Fine-Grained Localization in Ad-Hoc Networks of Sensors. In MobiCom '01.
[20] R. Stoleru, T. He, J. A. Stankovic, and D. Luebke. A High-Accuracy, Low-Cost Localization System for Wireless Sensor Networks. In SenSys '05.
[21] R. Stoleru, P. Vicaire, T. He, and J. A. Stankovic. StarDust: A Flexible Architecture for Passive Localization in Wireless Sensor Networks. In SenSys '06.
[22] E. W. Weisstein. Plane Division by Lines. mathworld.wolfram.com.
[23] B. H. Wellenhoff, H. Lichtenegger, and J. Collins. Global Positioning System: Theory and Practice, Fourth Edition. Springer Verlag, 1997.
[24] K. Whitehouse. The Design of Calamari: An Ad-Hoc Localization System for Sensor Networks. University of California at Berkeley, 2002.
[25] Z. Zhong. MSP Evaluation and Implementation Report. Available at http://www.cs.umn.edu/~zhong/MSP.
[26] G. Zhou, T. He, and J. A. Stankovic. Impact of Radio Irregularity on Wireless Sensor Networks. In MobiSys '04.
", "keywords": "marginal distribution;node localization;multi-sequence positioning;listen-detect-assemble-report protocol;event distribution;range-based approach;spatiotemporal correlation;localization;node sequence processing;distribution-based location estimation;massive uav-based deployment;wireless sensor network"} {"name": "train_C-45", "title": "StarDust: A Flexible Architecture for Passive Localization in Wireless Sensor Networks", "abstract": "The problem of localization in wireless sensor networks where nodes do not use ranging hardware remains challenging, when considering the required location accuracy, energy expenditure and the duration of the localization phase. In this paper we propose a framework, called StarDust, for wireless sensor network localization based on passive optical components. In the StarDust framework, sensor nodes are equipped with optical retro-reflectors. An aerial device projects light towards the deployed sensor network, and records an image of the reflected light. An image processing algorithm is developed for obtaining the locations of sensor nodes. For matching a node ID to a location we propose a constraint-based label relaxation algorithm. We propose and develop localization techniques based on four types of constraints: node color, neighbor information, deployment time for a node and deployment location for a node. We evaluate the performance of a localization system based on our framework by localizing a network of 26 sensor nodes deployed in a 120 x 60 ft2 area. The localization accuracy ranges from 2 ft to 5 ft while the localization time ranges from 10 milliseconds to 2 minutes.", "fulltext": "1 Introduction
Wireless Sensor Networks (WSN) have been envisioned to revolutionize the way humans perceive and interact with the surrounding environment. One vision is to embed tiny sensor devices in outdoor environments, by aerial deployments from unmanned air vehicles. The sensor nodes form a network and collaborate (to compensate for the extremely scarce resources available to each of them: computational power, memory size, communication capabilities) to accomplish the mission. Through collaboration, redundancy and fault tolerance, the WSN is then able to achieve unprecedented sensing capabilities.
A major step forward has been accomplished by developing systems for several domains: military surveillance [1] [2] [3], habitat monitoring [4] and structural monitoring [5]. Even after these successes, several research problems remain open. Among these open problems is sensor node localization, i.e., how to find the physical position of each sensor node. Despite the attention the localization problem in WSN has received, no universally acceptable solution has been developed. There are several reasons for this. On one hand, localization schemes that use ranging are typically high-end solutions. GPS ranging hardware consumes energy, is relatively expensive (if high accuracy is required) and poses form factor challenges that move us away from the vision of dust-size sensor nodes. Ultrasound has a short range and is highly directional. Solutions that use the radio transceiver for ranging either have not produced encouraging results (if the received signal strength indicator is used) or are sensitive to the environment (e.g., multipath).
On the other hand, localization schemes that only use connectivity information for inferring location are characterized by low accuracies: about 10 ft in controlled environments, 40-50 ft in realistic ones.
To address these challenges, we propose a framework for WSN localization, called StarDust, in which the complexity associated with node localization is completely removed from the sensor node. The basic principle of the framework is localization through passivity: each sensor node is equipped with a corner-cube retro-reflector and possibly an optical filter (a coloring device). An aerial vehicle projects light onto the deployment area and records images containing the retro-reflected light beams (they appear as luminous spots). Through image processing techniques, the locations of the retro-reflectors (i.e., sensor nodes) are determined. For inferring the identity of the sensor node present at a particular location, the StarDust framework develops a constraint-based node ID relaxation algorithm.
The main contributions of our work are the following. We propose a novel framework for node localization in WSNs that is very promising and allows for many future extensions and more accurate results. We propose a constraint-based label relaxation algorithm for mapping node IDs to locations, and four constraints (node, connectivity, time and space), which are building blocks for very accurate and very fast localization systems. We develop a sensor node hardware prototype, called a SensorBall. We evaluate the performance of a localization system for which we obtain location accuracies of 2-5 ft with a localization duration ranging from 10 milliseconds to 2 minutes. We investigate the range of a system built on our framework by considering the realities of the physical phenomena that occur during light propagation through the atmosphere.
The rest of the paper is structured as follows. Section 2 is an overview of the state of the art. The design of the StarDust framework is presented in Section 3. One implementation and its performance evaluation are in Sections 4 and 5, followed by a suite of system optimization techniques in Section 6. In Section 7 we present our conclusions.
2 Related Work
We present the prior work in localization in two major categories: the range-based and the range-free schemes.
The range-based localization techniques have been designed to use either more expensive hardware (and hence higher accuracy) or just the radio transceiver. Ranging techniques dependent on hardware are time-of-flight (ToF) and time-difference-of-arrival (TDoA). Solutions that use the radio are based on the received signal strength indicator (RSSI) and, more recently, on radio interferometry.
The most widely used ToF localization technique is GPS. GPS is a costly solution for high-accuracy localization of a large-scale sensor network. AHLoS [6] employs a TDoA ranging technique that requires extensive hardware and solves relatively large nonlinear systems of equations. The Cricket location-support system (TDoA) [7] can achieve a location granularity of tens of inches with highly directional and short-range ultrasound transceivers. In [2] the location of a sniper is determined in urban terrain, using the TDoA between an acoustic wave and a radio beacon. The PushPin project [8] uses the TDoA between ultrasound pulses and light flashes for node localization.
The RADAR system [9] uses the RSSI to build a map of signal strengths as emitted by a set of beacon nodes. A mobile node is located by the best match, in signal-strength space, with a previously acquired signature. In MAL [10], a mobile node assists in measuring distances (acting as constraints) between nodes until a rigid graph is generated. The localization problem has been formulated as an on-line state estimation problem in a nonlinear dynamic system [11]. A cooperative ranging approach that attempts to achieve global positioning from distributed local optimizations is proposed in [12]. A very recent, remarkable localization technique is based on radio interferometry: RIPS [13] utilizes two transmitters to create an interfering signal. The frequencies of the emitters are very close to each other, so the interfering signal has a low-frequency envelope that can be easily measured. The ranging technique performs very well, but the long time required for localization and multi-path environments pose significant challenges.
Real environments create additional challenges for range-based localization schemes, as emphasized by several studies [14] [15] [16]. To address these and other challenges (hardware cost, energy expenditure, form factor, short range, localization time), several range-free localization schemes have been proposed. In these, sensor nodes primarily use connectivity information to infer proximity to a set of anchors. In the Centroid localization scheme [17], a sensor node localizes to the centroid of its proximate beacon nodes. In APIT [18], each node decides its position based on the possibility of being inside or outside a triangle formed by any three beacons within the node's communication range. The Gradient algorithm [19] leverages knowledge of the network density to infer the average one-hop length, which can then be transformed into distances to nodes with known locations. DV-Hop [20] uses the hop-by-hop propagation capability of the network to forward distances to landmarks.
More recently, several localization schemes that exploit the sensing capabilities of sensor nodes have been proposed. Spotlight [21] creates well-controlled (in time and space) events in the network while sensor nodes detect and timestamp these events. From the spatio-temporal knowledge of the created events and the temporal information provided by the sensor nodes, the nodes' spatial information can be obtained. In a similar manner, the Lighthouse system [22] uses a parallel light beam, emitted by an anchor which rotates with a certain period. A sensor node detects the light beam for a period of time that depends on the distance between it and the light-emitting device.
Many of the above localization solutions target specific sets of requirements and are useful for specific applications. StarDust differs in that it addresses a particularly demanding set of requirements that is not yet well solved: StarDust is meant for localizing air-dropped nodes where node passiveness, high accuracy, low cost, small form factor and rapid localization are all required.
Many military applications have such requirements.
3 StarDust System Design
The design of the StarDust system (and its name) was inspired by the similarity between a deployed sensor network, in which sensor nodes indicate their presence by emitting light, and the Universe, consisting of luminous and illuminated objects: stars, galaxies, planets, etc.
The main difficulty in applying the above idea to the real world is the complexity of the hardware that would need to be placed on a sensor node so that the emitted light can be detected from thousands of feet; the energy expenditure for producing an intense enough light beam is also prohibitive. Instead, what we propose to use for sensor node localization is a passive optical element called a retro-reflector. The most common retro-reflective optical component is a Corner-Cube Retroreflector (CCR), shown in Figure 1(a); it consists of three mutually perpendicular mirrors.
[Figure 1. Corner-Cube Retroreflector (a) and an array of CCRs molded in plastic (b)]
The interesting property of this optical component is that an incoming beam of light is reflected back towards the source of the light, irrespective of the angle of incidence. This is in contrast with a mirror, which needs to be precisely positioned to be perpendicular to the incident light. A very common and inexpensive implementation of an array of CCRs is the retro-reflective plastic material used on cars and bicycles for night-time detection, shown in Figure 1(b).
In the StarDust system, each node is equipped with a small (e.g., 0.5 in2) array of CCRs, and the enclosure has self-righting capabilities that orient the array of CCRs predominantly upwards. It is critical to understand that the upward orientation does not need to be exact: even when large angular variations from a perfectly upward orientation are present, a CCR returns the light in the exact same direction from which it came.
In the remaining part of the section, we present the architecture of the StarDust system and the design of its main components.
3.1 System Architecture
The envisioned sensor network localization scenario is as follows:
- The sensor nodes are released, possibly in a controlled manner, from an aerial vehicle during the night.
- The aerial vehicle hovers over the deployment area and uses a strobe light to illuminate it. The sensor nodes, equipped with CCRs and optical filters (acting as coloring devices), have self-righting capabilities and retro-reflect the incoming strobe light. The retro-reflected light is either white, as the originating source light, or colored, due to the optical filters.
- The aerial vehicle records a sequence of two images very close in time (at the msec level). One image is taken when the strobe light is on, the other when the strobe light is off.
- The acquired images are used to obtain the locations of the sensor nodes (which appear as luminous spots in the image).
- The aerial vehicle executes the mapping of node IDs to the identified locations in one of the following ways: a) by using the color of the retro-reflected light, if a sensor node has a unique color; b) by requiring sensor nodes to establish neighborhood information and report it to a base station; c) by controlling the time sequence of sensor node deployment and recording additional images; d) by controlling the location where a sensor node is deployed.
- The computed locations are disseminated to the sensor network.
[Figure 2. The StarDust system architecture: the Light Emitter's light Ψ(λ) reaches sensor node i with transfer function Φi(λ); the retro-reflected Φ(Ψ(λ)) is input to Image Processing and Node ID Matching on the Central Device, together with the Radio Model estimate R and the connectivity graph G(Λ,E).]
The architecture of the StarDust system is shown in Figure 2. It consists of two main components: the first is centralized and is located on a more powerful device; the second is distributed and resides on all sensor nodes. The Central Device consists of the Light Emitter, the Image Processing module, the Node ID Mapping module and the Radio Model. The distributed component of the architecture is the Transfer Function, which acts as a filter for the incoming light. The aforementioned modules are briefly described below:
- Light Emitter - a strobe light, capable of producing very intense, collimated light pulses. The emitted light is non-monochromatic (unlike a laser) and is characterized by a spectral density Ψ(λ), a function of the wavelength. The emitted light is incident on the CCRs present on sensor nodes.
- Transfer Function Φ(Ψ(λ)) - a bandpass filter for the light incident on the CCR. The filter allows a portion of the original spectrum to be retro-reflected. From here on, we refer to the transfer function as the color of a sensor node.
- Image Processing - the Image Processing module acquires high-resolution images, from which the locations and the colors of the sensor nodes are obtained. If only one set of pictures can be taken (i.e., one location of the light emitter/image analysis device), then the map of the field is assumed to be known, as well as the distance between the imaging device and the field. These assumptions (field map and distance to it) are not necessary if the images can be taken simultaneously from different locations. It is important to remark that the identity of a node cannot be obtained through Image Processing alone, unless a specific characteristic of a sensor node can be identified in the image.
- Node ID Matching - this module uses the detected locations and, through additional techniques (e.g., sensor node coloring and connectivity information G(Λ,E) from the deployed network), uniquely identifies the sensor nodes observed in the image.
The connectivity information is represented by neighbor tables sent from each sensor node to the Central Device.
- Radio Model - this component provides an estimate of the radio range to the Node ID Matching module. It is only used by node ID matching techniques that are based on radio connectivity in the network. The estimate of the radio range R is based on the sensor node density (obtained through the Image Processing module) and the connectivity information (i.e., G(Λ,E)).
The two main components of the StarDust architecture are the Image Processing and the Node ID Mapping. Their design and analysis are presented in the sections that follow.
3.2 Image Processing
The goal of the Image Processing Algorithm (IPA) is to identify the locations of the nodes and their colors. Note that IPA does not identify which node fell where, but only the set of locations where nodes fell.
IPA is executed after an aerial vehicle records two pictures: one in which the field of deployment is illuminated and one in which no illumination is present. Let Pdark be the picture of the deployment area taken when no light was emitted, and Plight be the picture of the same deployment area when a strong light beam was directed towards the sensor nodes.

Algorithm 1 Image Processing
1: Background filtering
2: Retro-reflected light recognition through intensity filtering
3: Edge detection to obtain the location of sensor nodes
4: Color identification for each detected sensor node

The proposed IPA has several steps, as shown in Algorithm 1. The first step is to obtain a third picture, Pfilter, in which only the differences between Pdark and Plight remain. Let us assume that Pdark has a resolution of n x m, where n is the number of pixels in a row of the picture and m is the number of pixels in a column. Then Pdark is composed of n x m pixels Pdark(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ m. Similarly, Plight is composed of n x m pixels Plight(i, j), 1 ≤ i ≤ n, 1 ≤ j ≤ m.
Each pixel P is described by an RGB value, where the R value is denoted by P^R, the G value by P^G, and the B value by P^B. IPA then generates the third picture, Pfilter, through the following transformations:

P^R_filter(i, j) = P^R_light(i, j) - P^R_dark(i, j)
P^G_filter(i, j) = P^G_light(i, j) - P^G_dark(i, j)    (1)
P^B_filter(i, j) = P^B_light(i, j) - P^B_dark(i, j)

After this transformation, all the features that appeared in both Pdark and Plight are removed from Pfilter. This simplifies the recognition of the light retro-reflected by sensor nodes.
The second step consists of identifying the elements contained in Pfilter that retro-reflect light. For this, an intensity filter is applied to Pfilter: first, IPA converts Pfilter into a grayscale picture; then the brightest pixels are identified and used to create Preflect. This step is eased by the fact that the reflecting nodes should appear much brighter than any other illuminated object in the picture.
The third step runs an edge detection algorithm on Preflect to identify the boundary of each node present. A tool such as Matlab provides a number of edge detection techniques; we used the bwboundaries function. For the obtained edges, the location (x, y) (in the image) of each node is determined by computing the centroid of the points constituting its edges. Standard computer graphics techniques [23] are then used to transform the 2D locations of sensor nodes detected in multiple images into 3D sensor node locations. The color of a node is obtained as the color of the pixel located at (x, y) in Plight.
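The steps of Algorithm 1 map naturally onto array operations. The sketch below (ours) uses NumPy and SciPy as stand-ins for the paper's Matlab pipeline; the intensity threshold value and the use of connected components in place of bwboundaries-style edge detection are our assumptions.

import numpy as np
from scipy.ndimage import label, center_of_mass

def detect_nodes(p_light, p_dark, intensity_threshold=200):
    # Step 1 (Equation 1): per-channel subtraction removes every feature
    # present in both pictures.
    p_filter = np.clip(p_light.astype(np.int16) - p_dark.astype(np.int16),
                       0, 255).astype(np.uint8)
    # Step 2: grayscale conversion plus an intensity filter keeps only the
    # bright retro-reflected spots.
    p_reflect = p_filter.mean(axis=2) > intensity_threshold
    # Step 3: connected components stand in for edge detection; the
    # centroid of each component is the node's (row, col) in the image.
    labeled, count = label(p_reflect)
    centroids = center_of_mass(p_reflect, labeled, range(1, count + 1))
    # Step 4: the node's color is read from p_light at the centroid.
    colors = [tuple(p_light[int(r), int(c)]) for r, c in centroids]
    return centroids, colors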
3.3 Node ID Matching
The goal of the Node ID Matching module is to obtain the identity (node ID) of a luminous spot in the image detected to be a sensor node. For this, we define V = {(x1, y1), (x2, y2), ..., (xm, ym)} to be the set of locations of the sensor nodes, as detected by the Image Processing module, and Λ = {λ1, λ2, ..., λm} to be the set of unique node IDs assigned to the m sensor nodes before deployment. From here on, we refer to node IDs as labels.
We model the problem of finding the label λj of a node ni as a probabilistic label relaxation problem, frequently used in image processing/understanding. In the image processing domain, scene labeling (i.e., identifying objects in an image) plays a major role. The goal of scene labeling is to assign a label to each object detected in an image, such that an appropriate image interpretation is achieved. It is prohibitively expensive to consider the interactions among all the objects in an image; instead, constraints placed among nearby objects generate local consistencies, and through iteration, global consistencies can be obtained.
[Figure 3. Probabilistic label relaxation: node ni with candidate labels λ1 ... λk ... λN, their probabilities P1 ... PN, and the support Q(λk).]
The main idea of sensor node localization through probabilistic label relaxation is to iteratively compute the probability of each label being the correct label for a sensor node, by taking into account, at each iteration, the support for a label. The support for a label can be understood as a hint, or proof, that a particular label is more likely to be the correct one when compared with the other potential labels for a sensor node. We depict this main idea pictorially in Figure 3: node ni has a set of candidate labels {λ1, ..., λk}, and each label has a different value of the support function Q(λk). We defer the explanation of how the support function is implemented until the subsections that follow, where we provide four concrete techniques. Formally, the new probability P_ni(λk) for a label λk of a node ni is computed as:

P^{s+1}_{ni}(λk) = (1 / K_{ni}) · P^s_{ni}(λk) · Q^s_{ni}(λk)    (2)

where K_{ni} is a normalizing constant, given by:

K_{ni} = Σ_{k=1..N} P^s_{ni}(λk) · Q^s_{ni}(λk)    (3)

and:

Q^s_{ni}(λk) = support for label λk of node ni    (4)

The label relaxation algorithm is iterative and polynomial in the size of the network (number of nodes); the pseudo-code is shown in Algorithm 2. It initializes the probabilities associated with each possible label of a node ni through a uniform distribution. At each iteration s, the algorithm updates the probability associated with each label by considering the support Q^s_{ni}(λk) for each candidate label of a sensor node.

Algorithm 2 Label Relaxation
1: for each sensor node ni do
2:    assign equal prob. to all possible labels
3: end for
4: repeat
5:    converged <- true
6:    for each sensor node ni do
7:       for each label λj of ni do
8:          compute the support for label λj: Equation 4
9:       end for
10:      compute K for the node ni: Equation 3
11:      for each label λj do
12:         update probability of label λj: Equation 2
13:         if |new prob. - old prob.| ≥ ε then
14:            converged <- false
15:         end if
16:      end for
17:   end for
18: until converged = true
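Algorithm 2, together with Equations 2-4, translates directly into code. The sketch below (ours) keeps the support function abstract, since its implementation is exactly what distinguishes the four constraint types described next; support(node, label, probs) is our assumed callback signature.

def label_relaxation(nodes, labels, support, eps=1e-3, max_iters=100):
    # Probabilities start uniform (lines 1-3 of Algorithm 2).
    probs = {n: {l: 1.0 / len(labels) for l in labels} for n in nodes}
    for _ in range(max_iters):
        converged = True
        for n in nodes:
            q = {l: support(n, l, probs) for l in labels}   # Equation 4
            k = sum(probs[n][l] * q[l] for l in labels)     # Equation 3
            for l in labels:
                new_p = probs[n][l] * q[l] / k if k > 0 else probs[n][l]
                if abs(new_p - probs[n][l]) >= eps:         # Equation 2
                    converged = False
                probs[n][l] = new_p
        if converged:
            break
    return probs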
In the sections that follow, we describe four different techniques for implementing the support function: based on node coloring, radio connectivity, the time of deployment (time) and the location of deployment (space). While some of these techniques are simplistic, they are primitives which, when combined, can create powerful localization systems. These design techniques have different trade-offs, which we present in Section 3.3.6.
3.3.1 Relaxation with Color Constraints
The unique mapping between a sensor node's position (identified by image processing) and a label can be obtained by assigning a unique color to each sensor node. For this, we define C = {c1, c2, ..., cn} to be the set of unique colors available and M : Λ -> C to be a one-to-one mapping of labels to colors. This mapping is known prior to the sensor node deployment (from node manufacturing).
In the case of color constrained label relaxation, the support for label λk is expressed as:

Q^s_{ni}(λk) = 1    (5)

As a result, the label relaxation algorithm (Algorithm 2) consists of the following steps: one label is assigned to each sensor node (lines 1-3 of the algorithm), implicitly with probability P_ni(λk) = 1; the algorithm executes a single iteration, in which the support function simply reiterates the confidence in the unique labeling.
However, it is often the case that unique colors for each node are not available. It is interesting to discuss the influence that the size of the coloring space (i.e., |C|) has on the accuracy of the localization algorithm. Several cases are discussed below:
- If |C| = 0, no colors are used and the sensor nodes are equipped with simple CCRs that reflect back all the incoming light (i.e., no filtering and no coloring of the incoming light). From the image processing system, the positions of sensor nodes can still be obtained, but since all nodes appear white, no single sensor node can be uniquely identified.
- If |C| = m - 1, there are enough unique colors for all nodes (one node remains white, i.e., no coloring) and the problem is trivially solved: each node can be identified based on its unique color. This is the scenario for the relaxation with color constraints.
- If |C| ≥ 1, there are several options for how to partition the coloring space. If C = {c1}, one possibility is to assign the color c1 to a single node and leave the remaining m - 1 sensor nodes white; another is to assign the color c1 to more than one sensor node. One can observe that once a color is assigned uniquely to a sensor node, that sensor node is, in effect, given the status of an anchor, or node with known location.
It is interesting to observe that there is an entire spectrum of possibilities for partitioning the set of sensor nodes into equivalence classes (where an equivalence class is represented by one color) in order to maximize the success of the localization algorithm. One of the goals of this paper is to understand how the size of the coloring space and its partitioning affect localization accuracy.
Despite the simplicity of this method of constraining the set of labels that can be assigned to a node, we will show that this technique is very powerful when combined with other relaxation techniques.
3.3.2 Relaxation with Connectivity Constraints
Connectivity information, obtained from the sensor network through beaconing, can provide additional information for locating sensor nodes. In order to gather connectivity information, the following need to occur: 1) after deployment, through beaconing of HELLO messages, sensor nodes build their neighborhood tables; 2) each node sends its neighbor table information to the Central Device via a base station.
First, let us define G = (Λ, E) to be the weighted connectivity graph built by the Central Device from the received neighbor table information. In G, the edge (λi, λj) has a weight gij, represented by the number of beacons sent by λj and received by λi. In addition, let R be the radio range of the sensor nodes.
[Figure 4. Label relaxation with connectivity constraints: nodes ni and nj, each with candidate labels λ1 ... λN and associated label probabilities, linked by beacon counts such as g_{i2,j2} and g_{i2,jM}.]
The main idea of connectivity constrained label relaxation is depicted in Figure 4, in which two nodes ni and nj have been assigned all possible labels. The confidence in each of the candidate labels for a sensor node is represented by a probability, shown in a dotted rectangle.
It is important to remark that through beaconing and the reporting of neighbor tables to the Central Device, a global view of all constraints in the network can be obtained. It is critical to observe that these constraints are among labels. As shown in Figure 4, two constraints exist between nodes ni and nj; the constraints are depicted by g_{i2,j2} and g_{i2,jM}, the numbers of beacons sent by the labels λ_{j2} and λ_{jM} and received by the label λ_{i2}.
The support for the label λk of sensor node ni, resulting from the interaction (i.e., within radio range) with sensor node nj, is given by:

Q^s_{ni}(λk) = Σ_{m=1..M} g_{λk λm} · P^s_{nj}(λm)    (6)

As a result, the localization algorithm (Algorithm 3) consists of the following steps: all labels are assigned to each sensor node, each label implicitly having a probability initialized to P_ni(λk) = 1/|Λ|; in each iteration, the probabilities for the labels of a sensor node are updated by considering the interaction with the labels of sensor nodes within range R. It is important to remark that the identities of the nodes within R are not known, only the candidate labels and their probabilities. The relaxation algorithm converges when, during an iteration, no label probability is updated by more than ε.

Algorithm 3 Localization
1: Estimate the radio range R
2: Execute the Label Relaxation Algorithm with Support Function given by Equation 6 for neighbors less than R apart
3: for each sensor node ni do
4:    node identity is λk with max. prob.
5: end for

The label relaxation algorithm based on connectivity constraints enforces such constraints between pairs of sensor nodes. For a large-scale sensor network deployment, it is not feasible to consider all pairs of sensor nodes in the network; hence, the algorithm should only consider pairs of sensor nodes that are within a reasonable communication range (R). We assume a circular radio range and symmetric connectivity.
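Equation 6 gives the support contributed by one neighbor nj; how the contributions of several neighbors within R are combined is not spelled out above, so the sketch below (ours) simply sums them, which is one plausible choice. It plugs into the earlier label_relaxation sketch as the support callback (e.g., via a lambda closing over neighbors and g).

def connectivity_support(n_i, lam_k, probs, neighbors, g):
    # g[(lam_k, lam_m)]: beacons sent by label lam_m and received by label
    # lam_k, taken from the reported connectivity graph G(Λ, E).
    total = 0.0
    for n_j in neighbors[n_i]:
        for lam_m, p in probs[n_j].items():
            total += g.get((lam_k, lam_m), 0) * p   # Equation 6
    return total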
In the remaining part of the section, we propose a simple analytical model that estimates the radio range R for medium-connected networks (fewer than 20 neighbors per R). We consider the following to be known: the size of the deployment field (L), the number of sensor nodes deployed (N) and the total number of unidirectional (i.e., not symmetric) one-hop radio connections in the network (k). For our analysis, we uniformly distribute the sensor nodes in a square area of side L, by using a grid of unit length L/√N. We use the substitution u = L/√N to simplify the notation and distinguish the following cases: if u ≤ R ≤ √2·u, each node has four neighbors (the expected k = 4N); if √2·u ≤ R ≤ 2u, each node has eight neighbors (the expected k = 8N); if 2u ≤ R ≤ √5·u, each node has twelve neighbors (the expected k = 12N); if √5·u ≤ R ≤ 3u, each node has twenty neighbors (the expected k = 20N).
For a given t = k/4N, we take R to be the middle of the corresponding interval; as an example, if t = 5 then R = (3 + √5)u/2. A quadratic fitting for R over the possible values of t produces the following closed-form solution for the communication range R as a function of the network connectivity k, assuming L and N constant:

R(k) = (L/√N) · [ -0.051·(k/4N)^2 + 0.66·(k/4N) + 0.6 ]    (7)

We investigate the accuracy of this model in Section 5.2.1.
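Equation 7 is straightforward to evaluate; the sketch below (ours) also records the sanity check from the example above.

import math

def radio_range(L, N, k):
    # Closed-form estimate of R (Equation 7) from the field size L, the
    # node count N and the total number of one-hop connections k.
    t = k / (4.0 * N)
    return (L / math.sqrt(N)) * (-0.051 * t**2 + 0.66 * t + 0.6)

# Sanity check: for t = 5 the quadratic gives 2.625*u, close to the
# interval midpoint (3 + sqrt(5))/2 ≈ 2.618*u used in the fitting.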
3.3.3 Relaxation with Time Constraints
Time constraints can be treated similarly to color constraints. The unique identification of a sensor node can be obtained by deploying sensor nodes individually, one by one, and recording a sequence of images. The sensor node that is identified as new in the last picture (it was not present in the picture before last) must be the last sensor node dropped.
In a similar manner to color constrained label relaxation, the time constrained approach is very simple, but may take too long, especially for large-scale systems. While it can be used in practice, it is unlikely that a purely time constrained label relaxation would be used; as we will see, by combining constraint-based primitives, realistic localization systems can be implemented.
The support function for label relaxation with time constraints is defined identically to that of the color constrained relaxation:

Q^s_{ni}(λk) = 1    (8)

The localization algorithm (Algorithm 2) consists of the following steps: one label is assigned to each sensor node (lines 1-3 of the algorithm), implicitly with probability P_ni(λk) = 1; the algorithm executes a single iteration, in which the support function simply reiterates the confidence in the unique labeling.
[Figure 5. Relaxation with space constraints: a node with candidate labels Label-1 ... Label-4 at distances D1 ... D4, with probabilities 0.2, 0.1, 0.5 and 0.2.]
[Figure 6. Probability distribution of distances: the PDF of the distance D for σ = 0.5, 1 and 2.]
[Figure 7. Distribution of nodes: the spatial node density over the X-Y plane for σ = 1.]
3.3.4 Relaxation with Space Constraints
Spatial information related to sensor deployment can also be employed as another input to the label relaxation algorithm. To do that, we use two types of locations: the node location pn and the label location pl. The former, pn, is defined as the position (xn, yn, zn) of a node after deployment, which can be obtained through Image Processing as mentioned in Section 3.2. The latter, pl, is defined as the location (xl, yl, zl) where a node is dropped. We use D^{ni}_{λm} to denote the horizontal distance between the location of the label λm and the location of the node ni; clearly,

D^{ni}_{λm} = √( (xn - xl)^2 + (yn - yl)^2 ).

At the time of a sensor node's release, the one-to-one mapping between the node and its label is known; in other words, the label location is the same as the node location at release time. After release, the label location information is partially lost, due to random factors such as wind and surface impact. Statistically, however, the node locations remain correlated with the label locations. This correlation depends on the airdrop method employed and on the environment. For simplicity, let us assume that nodes are dropped from the air by a helicopter hovering above the deployment area. Wind can be decomposed into three components, X, Y and Z, of which only X and Y affect the horizontal distance a node travels. According to [24], we can assume that X and Y follow independent normal distributions; therefore, the absolute value of the wind speed follows a Rayleigh distribution. Obviously, the higher the wind speed, the farther a node lands, horizontally, from its label location. If we assume that the distance D is a function of the wind speed V [25] [26], we can obtain the probability distribution of D under a given wind speed distribution. Without loss of generality, we assume that D is proportional to the wind speed; therefore, D follows a Rayleigh distribution as well. As shown in Figure 5, spatial-based relaxation is a recursive process that assigns the probability that a node has a certain label by using the distances between the location of the node and multiple label locations.
We note that the distribution of the distance D affects the probability with which a label is assigned; it is not necessarily true that the nearest label is always chosen. For example, if D follows the Rayleigh(σ²) distribution, we obtain the probability density function (PDF) of distances shown in Figure 6. This figure indicates that the possibility of a node falling vertically is very small under windy conditions (σ > 0), and that the distance D is affected by σ. The spatial distribution of nodes for σ = 1 is shown in Figure 7; strong wind, with a high σ value, leads to a larger node dispersion. More formally, given a probability density function PDF(D), the support for label λk of sensor node ni can be formulated as:

Q^s_{ni}(λk) = PDF(D^{ni}_{λk})    (9)

It is interesting to point out two special cases. First, if all nodes are released at once (i.e., there is only one label location for all released nodes), the distance D from a node to all labels is the same. In this case, P^{s+1}_{ni}(λk) = P^s_{ni}(λk), which indicates that we cannot use spatial-based relaxation to recursively narrow down the potential labels for a node. Second, if nodes are released at locations that are far away from each other, we have: (i) if node ni has label λk, P^s_{ni}(λk) -> 1 as s -> ∞; (ii) if node ni does not have label λk, P^s_{ni}(λk) -> 0 as s -> ∞. In this second scenario, there are multiple labels (one label per release), so it is possible to correlate release times (labels) with positions on the ground. These results indicate that spatial-based relaxation can label a node with very high probability if the physical separation among nodes is large.
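Under the Rayleigh assumption above, the space-constrained support (Equation 9) is easy to sketch (ours; positions are the (x, y, z) triples defined earlier, and sigma is the Rayleigh parameter of the horizontal drift):

import math

def rayleigh_pdf(d, sigma):
    # Rayleigh density, modeling the horizontal drift D of a dropped node.
    return (d / sigma**2) * math.exp(-d**2 / (2 * sigma**2))

def space_support(node_pos, label_pos, sigma):
    # Equation 9: the support is the density at the horizontal distance
    # between the node's detected position and the label's drop location.
    d = math.dist(node_pos[:2], label_pos[:2])
    return rayleigh_pdf(d, sigma)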
3.3.5 Relaxation with Color and Connectivity Constraints

One of the most interesting features of the StarDust architecture is that it allows hybrid localization solutions to be built, depending on the system requirements. One example is a localization system that uses the color and connectivity constraints. In this scheme, the color constraints are used to reduce the number of candidate labels for sensor nodes to a more manageable value. As a reminder, in the connectivity constrained relaxation, all labels are candidate labels for each sensor node. The color constraints are used in the initialization phase of Algorithm 3 (lines 1-3). After the initialization, the standard connectivity constrained relaxation algorithm is used.

For a better understanding of how the label relaxation algorithm works, we give a concrete example, depicted in Figure 8. In part (a) of the figure we show the data structures associated with nodes ni and nj after the initialization steps of the algorithm (lines 1-6), as well as the number of beacons exchanged between different labels (as reported by the network, through G(Λ, E)). As seen, the potential labels (shown inside the vertical rectangles) are assigned to each node. Node ni can be any of the following: 11, 8, 4, 1. Also depicted in the figure are the probabilities associated with each of the labels. After initialization, all probabilities are equal.

Figure 8. A step through the algorithm: after initialization (a) and after the 1st iteration for node ni (b); the four labels of ni start at probability 0.25 each and become 0.32, 0, 0.68 and 0

Part (b) of Figure 8 shows the result of the first iteration of the localization algorithm for node ni, assuming that node nj is the first wi chosen in line 7 of Algorithm 3. By using Equation 6, the algorithm computes the support Q(λi) for each of the possible labels of node ni. Once the Q(λi)'s are computed, the normalizing constant, given by Equation 3, can be obtained. The last step of the iteration is to update the probabilities associated with all potential labels of node ni, as given by Equation 2.

One interesting problem, which we explore in the performance evaluation section, is to assess the impact the partitioning of the color set C has on the accuracy of localization. When the size of the coloring set is smaller than the number of sensor nodes (as is the case for our hybrid connectivity/color constrained relaxation), the system designer has the option of allowing a single node to have a given color (the node thus acting as an anchor), or assigning that color to multiple nodes. Intuitively, by assigning one color to more than one node, more (distributed) constraints can be enforced.
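The iteration of Figure 8 can be reproduced in a few lines of code. Equations 2, 3 and 6 appear earlier in the paper, so the sketch below assumes the standard relaxation update (multiply each label probability by its support, then renormalize); the support values in the example are made up, chosen so that the output matches part (b) of the figure:

```python
def relaxation_step(probs, supports):
    """One relaxation iteration for a single node: scale each label
    probability by its support (Equation 6 would supply the supports)
    and renormalize (Equations 2-3, assumed to have the standard
    P * Q / sum(P * Q) form)."""
    weighted = {lbl: p * supports[lbl] for lbl, p in probs.items()}
    norm = sum(weighted.values())
    return {lbl: w / norm for lbl, w in weighted.items()}

# Node ni of Figure 8 starts with four equiprobable labels.
print(relaxation_step({11: 0.25, 8: 0.25, 4: 0.25, 1: 0.25},
                      {11: 0.4, 8: 0.0, 4: 0.85, 1: 0.0}))
# -> {11: 0.32, 8: 0.0, 4: 0.68, 1: 0.0}
```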
3.3.6 Relaxation Techniques Analysis

The proposed label relaxation techniques have different trade-offs. For our analysis of the trade-offs, we consider the following metrics of interest: the localization time (duration), the energy consumed (overhead), the network size (scale) that can be handled by the technique, and the localization accuracy. The parameters of interest are: the number of sensor nodes (N), the energy spent for one aerial drop (εd), the energy spent in the network for collecting and reporting neighbor information (εb), and the time Td taken by a sensor node to reach the ground after being aerially deployed. The cost comparison of the different label relaxation techniques is shown in Table 1.

Criteria  | Color | Connectivity | Time | Space
Duration  | 0     | N·Tb         | N·Td | 0
Overhead  | εd    | εd + N·εb    | N·εd | εd
Scale     | |C|   | |N|          | |N|  | |N|
Accuracy  | High  | Low          | High | Medium

Table 1. Comparison of label relaxation techniques

As shown, the relaxation techniques based on color and space constraints have the lowest localization duration - zero, for all practical purposes. The scalability of the color based relaxation technique is, however, limited by the number of unique color filters that can be built. The narrower the Transfer Function Ψ(λ), the larger the number of unique colors that can be created; the manufacturing costs, however, increase as well. The scalability issue is addressed by all other label relaxation techniques. Most notably, the time constrained relaxation, which is very similar to the color constrained relaxation, addresses the scale issue, at a higher deployment cost.

4 System Implementation

The StarDust localization framework, depicted in Figure 2, is flexible in that it enables the development of new localization systems based on the four proposed label relaxation schemes, or the inclusion of other, yet to be invented, schemes. For our performance evaluation we implemented a version of the StarDust framework, namely the one proposed in Section 3.3.5, where the constraints are based on color and connectivity.

The Central device of the StarDust system consists of the following: the Light Emitter - we used a common off-the-shelf flash light (QBeam, 3 million candlepower); the image acquisition was done with a 3 megapixel digital camera (Sony DSC-S50), which provided the input to the Image Processing algorithm, implemented in Matlab.

Figure 9. SensorBall with self-righting capabilities (a) and colored CCRs (b)

For the sensor nodes we built a custom sensor node, called SensorBall, with self-righting capabilities, shown in Figure 9(a). The self-righting capabilities are necessary in order to orient the CCR predominantly upwards. The CCRs that we used were inexpensive, plastic molded, night time warning signs commonly available on bicycles, as shown in Figure 9(b). We remark here on the low quality of the CCRs we used. The reflectivity of each CCR (there are tens molded into the plastic container) is extremely low, and each CCR is not built with mirrors; a reflective effect is achieved by employing finely polished plastic surfaces. We had 5 colors available, in addition to the standard CCR, which reflects all the incoming light (white CCR). For a slightly higher price (ours were 20 cents/piece), better quality CCRs can be employed. Higher quality (better mirrors) would translate into more accurate image processing (better sensor node detection) and a smaller form factor for the optical component (an array of CCRs with a smaller area can be used).

Figure 10. The field in the dark
Figure 11. The illuminated field
Figure 12. The difference between Figures 10 and 11

The sensor node platform we used was the micaZ mote. The code that runs on each node is a simple application which broadcasts 100 beacons and maintains a neighbor table containing the percentage of successfully received beacons for each neighbor. On demand, the neighbor table is reported to a base station, where the node ID mapping is performed.
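The per-node beacon accounting just described is simple enough to capture in a few lines; the sketch below (class and method names are ours) mirrors the mote application in Python rather than in the motes' own language:

```python
class NeighborTable:
    """Tracks the percentage of successfully received beacons per
    neighbor, as in the mote application described above."""

    BEACONS_SENT = 100  # each node broadcasts 100 beacons

    def __init__(self):
        self.heard = {}  # neighbor id -> beacons received

    def on_beacon(self, neighbor_id):
        self.heard[neighbor_id] = self.heard.get(neighbor_id, 0) + 1

    def report(self):
        """Neighbor table reported to the base station on demand."""
        return {nid: 100.0 * n / self.BEACONS_SENT
                for nid, n in self.heard.items()}
```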
5 System Evaluation

In this section we present the performance evaluation of a system implementation of the StarDust localization framework. The three major research questions that our evaluation tries to answer are: the feasibility of the proposed framework (can sensor nodes be optically detected at large distances?), the localization accuracy of one actual implementation of the StarDust framework, and whether or not atmospheric conditions can affect the recognition of sensor nodes in an image. The first two questions are investigated by evaluating the two main components of the StarDust framework: the Image Processing and the Node ID Matching. These components have been evaluated separately, mainly because of the lack of adequate facilities: we wanted to evaluate the performance of the Image Processing Algorithm in a long range, realistic experimental set-up, while the Node ID Matching required a relatively large area, available for long periods of time (for connectivity data gathering). The third research question is investigated through computer modeling of atmospheric phenomena.

For the evaluation of the Image Processing module, we performed experiments in a football stadium where we deployed 6 sensor nodes in a 3×2 grid. The distance between the Central device and the sensor nodes was approximately 500 ft. The metrics of interest are the number of false positives and false negatives produced by the Image Processing Algorithm.

For the evaluation of the Node ID Mapping component, we deployed 26 sensor nodes in a 120×60 ft² flat area of a stadium. In order to investigate the influence radio connectivity has on localization accuracy, we varied the height above ground of the deployed sensor nodes. Two set-ups were used: one in which the sensor nodes were on the ground, and a second one in which the sensor nodes were raised 3 inches above ground. From here on, we will refer to these two experimental set-ups as the low connectivity and the high connectivity networks, respectively, because when nodes are on the ground the communication range is low, resulting in fewer neighbors than when the nodes are elevated and have a greater communication range. The metrics of interest are: the localization error (defined as the distance between the computed location and the true location, known from the manual placement), the percentage of nodes correctly localized, the convergence of the label relaxation algorithm, the time to localize, and the robustness of the node ID mapping to errors in the Image Processing module.

The parameters that we vary experimentally are: the angle under which images are taken, the focus of the camera, and the degree of connectivity. The parameters that we vary in simulations (subsequent to image acquisition and connectivity collection) are: the number of colors, the number of anchors, the number of false positives or negatives given as input to the Node ID Matching component, the distance between the imaging device and the sensor network (i.e., the range), the atmospheric conditions (the light attenuation coefficient) and the CCR reflectance (indicative of its quality).
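The two accuracy metrics defined above are straightforward to compute. A minimal sketch under our own naming, taking "correctly localized" to mean 0 ft localization error, as the later sections use the term:

```python
def localization_metrics(computed, truth):
    """Mean localization error (distance between computed and true,
    manually recorded positions) and the percentage of nodes
    localized correctly (0 ft error).

    computed, truth -- {node id: (x, y)} positions in feet
    """
    errors = []
    for nid, (x, y) in computed.items():
        tx, ty = truth[nid]
        errors.append(((x - tx) ** 2 + (y - ty) ** 2) ** 0.5)
    mean_error = sum(errors) / len(errors)
    pct_correct = 100.0 * sum(1 for e in errors if e == 0) / len(errors)
    return mean_error, pct_correct
```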
5.1 Image Processing

For the IPA evaluation, we deploy 6 sensor nodes in a 3×2 grid. We take 13 sets of pictures using different orientations of the camera and different zooming factors. All pictures were taken from the same location. Each set is composed of a picture taken in the dark and of a picture taken with a light beam pointed at the nodes. We process the pictures offline using a Matlab implementation of IPA. Since we are interested in the feasibility of identifying colored sensor nodes at a large distance, the end result of our IPA is the 2D location of the sensor nodes (their position in the image). The transformation to 3D coordinates can be done through standard computer graphics techniques [23].

One set of pictures obtained as part of our experiment is shown in Figures 10 and 11. The execution of our IPA algorithm results in Figure 12, which filters out the background, and Figure 13, which shows the output of the edge detection step of IPA. The experimental results are depicted in Figure 14. For each set of pictures, the graph shows the number of false positives (the IPA determines that there is a node while there is none) and the number of false negatives (the IPA determines that there is no node while there is one).

Figure 13. Retroreflectors detected in Figure 12
Figure 14. False Positives and Negatives for the 6 nodes (count per experiment number, 1-11)

In about 45% of the cases we obtained perfect results, i.e., no false positives and no false negatives. In the remaining cases, we obtained at most one false positive and at most two false negatives.

We exclude two pairs of pictures from Figure 14. In the first excluded pair we obtained 42 false positives, and in the second pair 10 false positives and 7 false negatives. By carefully examining the pictures, we realized that the first pair was taken out of focus and that a car temporarily appeared in one of the pictures of the second pair. The anomaly in the second set was due to the fact that we waited too long to take the second picture. If the pictures had been taken a few milliseconds apart, the car would have been represented in either both or none of the pictures, and the IPA would have filtered it out.
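The background-filtering step of IPA (the transition from Figures 10 and 11 to Figure 12) amounts to differencing the dark and the illuminated frames. A minimal numpy sketch, assuming grayscale images; the threshold value is illustrative, and the actual IPA additionally performs edge detection and color recognition:

```python
import numpy as np

def detect_reflectors(dark, lit, threshold=40):
    """Subtract the dark frame from the illuminated frame and return
    the pixel coordinates that brightened under the light beam,
    i.e., candidate retroreflectors."""
    diff = lit.astype(np.int16) - dark.astype(np.int16)
    return np.argwhere(diff > threshold)
```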
5.2 Node ID Matching

We evaluate the Node ID Matching component of our system by collecting empirical data (connectivity information) from the outdoor deployment of 26 nodes in the 120×60 ft² area. We collect 20 sets of data for the high connectivity and low connectivity network deployments. Off-line, we investigate the influence of coloring on the metrics of interest by randomly assigning colors to the sensor nodes. For one experimental data set we generate 50 random assignments of colors to sensor nodes. It is important to observe that, for the evaluation of the Node ID Matching algorithm (color and connectivity constrained), we simulate the color assignment to sensor nodes. As mentioned in Section 4, the size of the coloring space available to us was 5 (5 colors). Through simulations of the color assignment (not of connectivity) we are able to investigate the influence that the size of the coloring space has on the accuracy of localization. The value of the parameter ε used in Algorithm 2 was 0.001. The results presented here represent averages over the randomly generated colorings and over all experimental data sets.

We first investigate the accuracy of our proposed Radio Model, and subsequently use the derived values for the radio range in the evaluation of the Node ID Matching component.

5.2.1 Radio Model

From experiments, we obtain an average number of observed beacons (k, defined in Section 3.3.2) of 180 for the low connectivity network and of 420 for the high connectivity network. From our Radio Model (Equation 7), we obtain a radio range R = 25 ft for the low connectivity network and R = 40 ft for the high connectivity network.

To estimate the accuracy of our simple model, we plot the number of radio links that exist in the networks, and the number of links that are missing, as functions of the distance between nodes. The results are shown in Figures 15 and 16. We define the average radio range R to be the distance over which less than 20% of the potential radio links are missing. As shown in Figure 15, the radio range is between 20 ft and 25 ft. For the higher connectivity network, the radio range was between 30 ft and 40 ft.

Figure 15. The number of existing and missing radio connections in the sparse connectivity experiment (count vs. distance in feet)
Figure 16. The number of existing and missing radio connections in the high connectivity experiment (count vs. distance in feet)

We choose two conservative estimates of the radio range: 20 ft for the low connectivity case and 35 ft for the high connectivity case, which are in good agreement with the values predicted by our Radio Model.

5.2.2 Localization Error vs. Coloring Space Size

In this experiment we investigate the effect of the number of colors on the localization accuracy. For this, we randomly assign colors from a pool of a given size to the sensor nodes. We then execute the localization algorithm, which uses the empirical data. The algorithm is run for three different radio ranges: 15, 20 and 25 ft, to investigate their influence on the localization error.

The results are depicted in Figure 17 (localization error) and Figure 18 (percentage of nodes correctly localized). As shown, for an estimate of 20 ft for the radio range (as predicted by our Radio Model) we obtain the smallest localization errors, as small as 2 ft, when enough colors are used. Both Figures 17 and 18 confirm our intuition that a larger number of available colors significantly decreases the localization error.

Figure 17. Localization error (in feet, vs. number of colors, for R = 15, 20 and 25 ft)
Figure 18. Percentage of nodes correctly localized (vs. number of colors, for R = 15, 20 and 25 ft)

The well-known fact that relaxation algorithms do not always converge was observed during our experiments. The percentage of successful runs (those in which the algorithm converged) is depicted in Figure 19. As shown, in several situations the algorithm failed to converge (the algorithm execution was stopped after 100 iterations per node). If the algorithm does not converge in the predetermined number of steps, it terminates and the label with the highest probability provides the identity of the node. It is very probable that the chosen label is incorrect, since the probabilities of some of the labels are still changing with each iteration. The convergence of relaxation based algorithms is a well known issue.

Figure 19. Convergence error (convergence rate vs. number of colors, for R = 15, 20 and 25 ft)
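The termination policy described above can be sketched as follows; how exactly ε enters the convergence test is our assumption, since Algorithm 2 appears earlier in the paper. The `step` argument would be, for instance, the `relaxation_step` sketch from Section 3.3.5:

```python
MAX_ITERATIONS = 100  # per node, as in the experiments
EPSILON = 0.001       # the epsilon value used in Algorithm 2

def localize_node(probs, supports, step):
    """Iterate until the largest probability change drops below
    EPSILON or MAX_ITERATIONS is reached; either way, the label
    with the highest probability identifies the node."""
    for _ in range(MAX_ITERATIONS):
        new = step(probs, supports)
        delta = max(abs(new[lbl] - probs[lbl]) for lbl in probs)
        probs = new
        if delta < EPSILON:
            break
    return max(probs, key=probs.get)
```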
5.2.3 Localization Error vs. Color Uniqueness

As mentioned in Section 3.3.1, a unique color gives a sensor node the status of an anchor. A sensor node that is an anchor can be unequivocally identified through the Image Processing module. In this section we investigate the effect unique colors have on the localization accuracy. Specifically, we want to experimentally verify our intuition that assigning more nodes to a color can benefit the localization accuracy, by enforcing more constraints, as opposed to uniquely assigning a color to a single node.

For this, we fix the number of available colors to either 4, 6 or 8 and vary the number of nodes that are given unique colors, from 0 up to the maximum number of colors (4, 6 or 8). Naturally, if we have a maximum of 4 colors, we can assign at most 4 anchors. The experimental results are depicted in Figure 20 (localization error) and Figure 21 (percentage of sensor nodes correctly localized). As expected, the localization accuracy increases with the number of available colors (a larger coloring space). Also, for a given size of the coloring space (e.g., 6 available colors), if more colors are uniquely assigned to sensor nodes, then the localization accuracy decreases. It is interesting to observe that by assigning colors uniquely to nodes, the benefit of having additional colors is diminished. Specifically, if 8 colors are available and all are assigned uniquely, the system is localized less accurately (error ≈ 7 ft) than in the case of 6 colors with no unique assignments (localization error ≈ 5 ft).

Figure 20. Localization error vs. number of colors (4, 6 or 8 colors, 0-8 anchors)
Figure 21. Percentage of nodes correctly localized vs. number of colors (4, 6 or 8 colors, 0-8 anchors)

The same trend of less accurate localization can be observed in Figure 21, which shows the percentage of nodes correctly localized (i.e., with 0 ft localization error). As shown, if we increase the number of colors that are uniquely assigned, the percentage of correctly localized nodes decreases.

5.2.4 Localization Error vs. Connectivity

We collected empirical data for two network deployments with different degrees of connectivity (high and low) in order to assess the influence of connectivity on location accuracy. The results obtained from running our localization algorithm are depicted in Figures 22 and 23. We varied the number of colors available and assigned no anchors (i.e., no unique assignments of colors).

In both scenarios, as expected, the localization error decreases with an increase in the number of colors. It is interesting to observe, however, that the low connectivity scenario improves its localization accuracy more quickly with the additional number of available colors. When the number of colors becomes relatively large (twelve, for our 26 sensor node network), both scenarios (low and high connectivity) have comparable localization errors, of less than 2 ft. The same trend towards more accurate location information is evidenced by Figure 23, which shows that the percentage of nodes that are localized correctly grows more quickly for the low connectivity deployment.

Figure 22. Localization error vs. number of colors (low and high connectivity deployments)
Figure 23. Percentage of nodes correctly localized vs. number of colors (low and high connectivity deployments)
5.3 Localization Error vs. Image Processing Errors

So far we have investigated the sources of localization error that are intrinsic to the Node ID Matching component. As previously presented, luminous objects can mistakenly be detected as sensor nodes during the location detection phase of the Image Processing module. These false positives can be eliminated by the color recognition procedure of the Image Processing module. More problematic are false negatives (when a sensor node does not reflect back enough light to be detected), which need to be handled by the localization algorithm. In this case, the localization algorithm is presented with two sets of nodes of different sizes that need to be matched: one coming from the Image Processing (which misses some nodes) and one coming from the network, with the connectivity information (here we assume a fully connected network, so that all sensor nodes report their connectivity information). In this experiment we investigate how Image Processing errors (false negatives) influence the localization accuracy.

For this evaluation, we ran our localization algorithm with empirical data, but dropped a percentage of nodes from the list of nodes detected by the Image Processing algorithm (i.e., we artificially introduced false negatives into the Image Processing). The effect of false negatives on the localization accuracy is depicted in Figure 24. As seen in the figure, if the number of false negatives is 15%, the error in position estimation doubles when 4 colors are available. It is interesting to observe that a scenario with more available colors (e.g., 12 colors) is affected more drastically than a scenario with fewer colors (e.g., 4 colors).

Figure 24. Impact of false negatives on the localization error (error in feet vs. percentage of false negatives, for 4, 8 and 12 colors)
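Injecting false negatives, as done in this experiment, is a one-liner; the function name is ours:

```python
import random

def inject_false_negatives(detected, fraction):
    """Drop roughly the given fraction of the nodes reported by the
    Image Processing module, emulating undetected sensor nodes."""
    return [n for n in detected if random.random() >= fraction]
```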
The benefit of having more colors available is still maintained, however, at least for the range of colors we investigated (4 through 12 colors).

5.4 Localization Time

In this section we look more closely at the duration of each of the four proposed relaxation techniques and of two combinations of them: color-connectivity and color-time. We assume that 50 unique color filters can be manufactured, that the sensor network is deployed from 2,400 ft (necessary for the time-constrained relaxation), and that the time required for reporting connectivity grows linearly with the network size, with an initial reporting period of 160 sec, as used in a real-world tracking application [1]. The localization duration results, as presented in Table 1, are depicted in Figure 25.

As shown, for all practical purposes the time required by the space constrained relaxation technique is 0 sec. The same applies to the color constrained relaxation, for which the localization time is 0 sec (if the number of colors is sufficient). Under our assumptions, the color constrained relaxation works only for a network of size 50. The localization duration for all other network sizes (100, 150 and 200) is infinite (i.e., unique color assignments to sensor nodes can not be made, since only 50 colors are unique) when only the color constrained relaxation is used. Both the connectivity constrained and the time constrained techniques increase linearly with the network size (for the time constrained one, the Central device deploys the sensor nodes one by one, recording an image after the time a sensor node is expected to take to reach the ground).

Figure 25. Localization time (in seconds) for the different label relaxation schemes (Color, Connectivity, Time, Space, Color-Connectivity, Color-Time), for networks of 50, 100, 150 and 200 nodes

It is interesting to notice in Figure 25 the improvement in the localization time obtained by simply combining the color and the connectivity constrained techniques. The localization duration in this case is identical to that of the connectivity constrained technique alone.

The combination of the color and time constrained relaxations is even more interesting. For a reasonable localization duration of 52 seconds, a perfect (i.e., 0 ft localization error) localization system can be built. In this scenario, the set of sensor nodes is split into batches, with each batch having a set of unique colors. It would be very interesting to consider other scenarios, where the strength of the space constrained relaxation (0 sec for any sensor network size) is used for improving the other proposed relaxation techniques. We leave the investigation and rigorous classification of such technique combinations for future work.
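The durations plotted in Figure 25 follow the expressions of Table 1. A sketch of that cost model is given below; the per-node times t_b and t_d are left to the caller, since the paper states only the growth trends and the 160 sec initial reporting period:

```python
def localization_duration(technique, n, t_b, t_d):
    """Asymptotic localization duration per Table 1.

    n   -- network size
    t_b -- per-node connectivity reporting time (T_b)
    t_d -- time for one node to reach the ground (T_d)
    """
    durations = {
        "color": 0.0,             # immediate, if enough unique colors
        "space": 0.0,             # immediate, for all practical purposes
        "connectivity": n * t_b,  # linear in the network size
        "time": n * t_d,          # one drop-and-image cycle per node
    }
    return durations[technique]
```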
5.5 System Range

In this section we evaluate the feasibility of the StarDust localization framework when considering the realities of light propagation through the atmosphere. The main factor that determines the range of our system is light scattering, which redirects the luminance of the source into the medium (in essence equally affecting the luminosity of the target and of the background). Scattering limits the visibility range by reducing the apparent contrast between the target and its background (which approaches zero as the distance increases). The apparent contrast Cr is quantitatively expressed by the formula:

$C_r = (N^t_r - N^b_r)/N^b_r$   (10)

where $N^t_r$ and $N^b_r$ are the apparent target radiance and the apparent background radiance at distance r from the light source, respectively. The apparent radiance $N^t_r$ of a target at a distance r from the light source is given by:

$N^t_r = N_a + \frac{I \rho_t e^{-2\sigma r}}{\pi r^2}$   (11)

where I is the intensity of the light source, ρt is the target reflectance, σ is the spectral attenuation coefficient (≈ 0.12 km⁻¹ and ≈ 0.60 km⁻¹ for a clear and a hazy atmosphere, respectively) and Na is the radiance of the atmospheric backscatter, which can be expressed as follows:

$N_a = \frac{G \sigma^2 I}{2\pi} \int_{0.02\sigma r}^{2\sigma r} \frac{e^{-x}}{x^2} \, dx$   (12)

where G = 0.24 is a backscatter gain. The apparent background radiance $N^b_r$ is given by formulas similar to Equations 11 and 12, where only the target reflectance ρt is substituted with the background reflectance ρb. It is important to remark that when Cr reaches its lower limit, no increase in the source luminance or receiver sensitivity will increase the range of the system. From Equations 11 and 12 it can be observed that the parameter which can be controlled and which can influence the range of the system is ρt, the target reflectance.

Figure 26. Apparent contrast Cr vs. distance r (feet) in a clear atmosphere, for ρt = 0.3 through 1.0
Figure 27. Apparent contrast Cr vs. distance r (feet) in a hazy atmosphere, for ρt = 0.3 through 1.0

Figures 26 and 27 depict the apparent contrast Cr as a function of the distance r for a clear and for a hazy atmosphere, respectively. The apparent contrast is investigated for target reflectance coefficients ρt ranging from 0.3 to 1.0 (a perfect reflector). For a contrast C of at least 0.5, as can be seen in Figure 26, a range of approximately 4,500 ft can be achieved if the atmosphere is clear. The performance deteriorates dramatically when the atmospheric conditions are problematic: as shown in Figure 27, a range of up to 1,500 ft is achievable when using highly reflective CCR components.

While our light source (3 million candlepower) was sufficient for a range of a few hundred feet, we remark that there exist commercially available light sources (20 million candlepower) or military ones (150 million candlepower [27]) powerful enough for ranges of a few thousand feet.
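Equations 10-12 are easy to evaluate numerically; the sketch below uses scipy for the integral in Equation 12 and treats σ and r in consistent units (e.g., km⁻¹ and km). Function names are ours:

```python
from math import pi, exp
from scipy.integrate import quad

G = 0.24  # backscatter gain

def backscatter(intensity, sigma, r):
    """Atmospheric backscatter radiance N_a (Equation 12)."""
    integral, _ = quad(lambda x: exp(-x) / x**2,
                       0.02 * sigma * r, 2 * sigma * r)
    return G * sigma**2 * intensity * integral / (2 * pi)

def apparent_contrast(intensity, sigma, r, rho_t, rho_b):
    """Apparent contrast C_r (Equations 10-11); the background
    radiance uses the same form with rho_b in place of rho_t."""
    n_a = backscatter(intensity, sigma, r)
    att = exp(-2 * sigma * r) / (pi * r**2)
    n_t = n_a + intensity * rho_t * att
    n_b = n_a + intensity * rho_b * att
    return (n_t - n_b) / n_b
```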
6 StarDust System Optimizations

In this section we describe extensions of the proposed architecture that can constitute future research directions.

6.1 Chained Constraint Primitives

In this paper we proposed four primitives for constraint-based relaxation algorithms: color, connectivity, time and space. To demonstrate the power that can be obtained by combining them, we proposed and evaluated one combination of such primitives: color and connectivity. An interesting research direction to pursue could be to chain more than two of these primitives. An example of such a chain is: color, temporal, spatial and connectivity. Other research directions could be to use a voting scheme for deciding which primitive to use, or to assign different weights to the different relaxation algorithms.

6.2 Location Learning

If, after several iterations of the algorithm, none of the label probabilities for a node ni converges to a higher value, the confidence in our labeling of that node is relatively low. It would be interesting to associate more than one label (implicitly, more than one location) with a node, and to defer the label assignment decision until events are detected in the network (if the network was deployed for target tracking).

6.3 Localization in Rugged Environments

The initial driving force for the StarDust localization framework was to address sensor node localization in extremely rugged environments. Canopies, dense vegetation and extremely obstructing environments pose significant challenges for sensor node localization. The hope, and our original idea, was to exploit the time period between the aerial deployment and the time when a sensor node disappears under the canopy. By recording the last visible position of a sensor node (as seen from the aircraft), a reasonable estimate of the sensor node location can be obtained. This would require that sensor nodes possess self-righting capabilities while in mid-air. Nevertheless, we remark on the suitability of our localization framework for rugged, non-line-of-sight environments.

7 Conclusions

StarDust solves the localization problem for aerial deployments where passiveness, low cost, small form factor and rapid localization are required. Results show that the accuracy can be within 2 ft and the localization time within milliseconds. StarDust also shows robustness with respect to errors. We predict the influence the atmospheric conditions can have on the range of a system based on the StarDust framework, and show that hazy environments or daylight can pose significant challenges.

Most importantly, the properties of StarDust support the potential for even more accurate localization solutions, as well as for solutions targeting rugged, non-line-of-sight environments.

8 References
[1] T. He, S. Krishnamurthy, J. A. Stankovic, T. Abdelzaher, L. Luo, R. Stoleru, T. Yan, L. Gu, J. Hui, and B. Krogh, An energy-efficient surveillance system using wireless sensor networks, in MobiSys, 2004.
[2] G. Simon, M. Maroti, A. Ledeczi, G. Balogh, B. Kusy, A. Nadas, G. Pap, J. Sallai, and K. Frampton, Sensor network-based countersniper system, in SenSys, 2004.
[3] A. Arora, P. Dutta, and B. Bapat, A line in the sand: A wireless sensor network for target detection, classification and tracking, in Computer Networks, 2004.
[4] R. Szewczyk, A. Mainwaring, J. Polastre, J. Anderson, and D. Culler, An analysis of a large scale habitat monitoring application, in ACM SenSys, 2004.
[5] N. Xu, S. Rangwala, K. K. Chintalapudi, D. Ganesan, A. Broad, R. Govindan, and D. Estrin, A wireless sensor network for structural monitoring, in ACM SenSys, 2004.
[6] A. Savvides, C. Han, and M. Srivastava, Dynamic fine-grained localization in ad-hoc networks of sensors, in Mobicom, 2001.
[7] N. Priyantha, A. Chakraborty, and H. Balakrishnan, The Cricket location-support system, in Mobicom, 2000.
[8] M. Broxton, J. Lifton, and J. Paradiso, Localizing a sensor network via collaborative processing of global stimuli, in EWSN, 2005.
[9] P. Bahl and V. N. Padmanabhan, RADAR: An in-building RF-based user location and tracking system, in IEEE Infocom, 2000.
[10] N. Priyantha, H. Balakrishnan, E. Demaine, and S. Teller, Mobile-assisted topology generation for auto-localization in sensor networks, in IEEE Infocom, 2005.
[11] P. N. Pathirana, A. Savkin, S. Jha, and N. Bulusu, Node localization using mobile robots in delay-tolerant sensor networks, IEEE Transactions on Mobile Computing, 2004.
[12] C. Savarese, J. M. Rabaey, and J. Beutel, Locationing in distributed ad-hoc wireless sensor networks, in ICASSP, 2001.
[13] M. Maroti, B. Kusy, G. Balogh, P. Volgyesi, A. Nadas, K. Molnar, S. Dora, and A. Ledeczi, Radio interferometric geolocation, in ACM SenSys, 2005.
[14] K. Whitehouse, A. Woo, C. Karlof, F. Jiang, and D. Culler, The effects of ranging noise on multi-hop localization: An empirical study, in IPSN, 2005.
[15] Y. Kwon, K. Mechitov, S. Sundresh, W. Kim, and G. Agha, Resilient localization for sensor networks in outdoor environment, UIUC, Tech. Rep., 2004.
[16] R. Stoleru and J. A. Stankovic, Probability grid: A location estimation scheme for wireless sensor networks, in SECON, 2004.
[17] N. Bulusu, J. Heidemann, and D. Estrin, GPS-less low cost outdoor localization for very small devices, IEEE Personal Communications Magazine, 2000.
[18] T. He, C. Huang, B. Blum, J. A. Stankovic, and T. Abdelzaher, Range-free localization schemes in large scale sensor networks, in ACM Mobicom, 2003.
[19] R. Nagpal, H. Shrobe, and J. Bachrach, Organizing a global coordinate system from local information on an ad-hoc sensor network, in IPSN, 2003.
[20] D. Niculescu and B. Nath, Ad-hoc positioning system, in IEEE GLOBECOM, 2001.
[21] R. Stoleru, T. He, J. A. Stankovic, and D. Luebke, A high-accuracy low-cost localization system for wireless sensor networks, in ACM SenSys, 2005.
[22] K. Römer, The lighthouse location system for smart dust, in ACM/USENIX MobiSys, 2003.
[23] R. Y. Tsai, A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses, IEEE JRA, 1987.
[24] C. L. Archer and M. Z. Jacobson, Spatial and temporal distributions of U.S. winds and wind power at 80m derived from measurements, Geophysical Research Jrnl., 2003.
[25] Team for advanced flow simulation and modeling. [Online]. Available: http://www.mems.rice.edu/TAFSM/RES/
[26] K. Stein, R. Benney, T. Tezduyar, V. Kalro, and J. Leonard, 3-D computation of parachute fluid-structure interactions - performance and control, in Aerodynamic Decelerator Systems Conference, 1999.
[27] Headquarters Department of the Army, Technical manual for searchlight infrared AN/GSS-14(V)1, 1982.", "keywords": "range;unique mapping;performance;image processing;connectivity;localization;scene labeling;probability;sensor node;aerial vehicle;consistency;wireless sensor network;corner-cube retro-reflector"} {"name": "train_C-46", "title": "TSAR: A Two Tier Sensor Storage Architecture Using Interval Skip Graphs", "abstract": "Archival storage of sensor data is necessary for applications that query, mine, and analyze such data for interesting features and trends. We argue that existing storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks, where an application can comprise tens of tethered proxies, each managing tens to hundreds of untethered sensors. We present TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local archiving at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Interval Skip Graph, for efficiently supporting spatio-temporal and value queries.
At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarse-grain index. We implement TSAR in a two-tier sensor testbed comprising Stargate-based proxies and Mote-based sensors. Our experiments demonstrate the benefits and feasibility of using our energy-efficient storage architecture in multi-tier sensor networks.", "fulltext": "1. Introduction

1.1 Motivation

Many different kinds of networked data-centric sensor applications have emerged in recent years. Sensors in these applications sense the environment and generate data that must be processed, filtered, interpreted, and archived in order to provide a useful infrastructure to its users. To achieve its goals, a typical sensor application needs access to both live and past sensor data. Whereas access to live data is necessary in monitoring and surveillance applications, access to past data is necessary for applications such as mining of sensor logs to detect unusual patterns, analysis of historical trends, and post-mortem analysis of particular events. Archival storage of past sensor data requires a storage system, the key attributes of which are: where the data is stored, whether it is indexed, and how the application can access this data in an energy-efficient manner with low latency.

There has been a spectrum of approaches for constructing sensor storage systems. In the simplest, sensors stream data or events to a server for long-term archival storage [3], where the server often indexes the data to permit efficient access at a later time. Since sensors may be several hops from the nearest base station, network costs are incurred; however, once data is indexed and archived, subsequent data accesses can be handled locally at the server without incurring network overhead. In this approach, the storage is centralized, reads are efficient and cheap, while writes are expensive. Further, all data is propagated to the server, regardless of whether it is ever used by the application.

An alternate approach is to have each sensor store data or events locally (e.g., in flash memory), so that all writes are local and incur no communication overheads. A read request, such as whether an event was detected by a particular sensor, requires a message to be sent to the sensor for processing. More complex read requests are handled by flooding. For instance, determining if an intruder was detected over a particular time interval requires the request to be flooded to all sensors in the system. Thus, in this approach, the storage is distributed, writes are local and inexpensive, while reads incur significant network overheads. Requests that require flooding, due to the lack of an index, are expensive and may waste precious sensor resources, even if no matching data is stored at those sensors. Research efforts such as Directed Diffusion [17] have, however, attempted to reduce these read costs by intelligent message routing.

Between these two extremes lie a number of other sensor storage systems with different trade-offs, summarized in Table 1. The geographic hash table (GHT) approach [24, 26] advocates the use of an in-network index to augment the fully distributed nature of sensor storage. In this approach, each data item has a key associated with it, and a distributed or geographic hash table is used to map keys to nodes that store the corresponding data items.
Thus, writes cause data items to be sent to the hashed nodes and also trigger updates to the in-network hash table. A read request requires a lookup in the in-network hash table to locate the node that stores the data item; observe that the presence of an index eliminates the need for flooding in this approach.

Most of these approaches assume a flat, homogeneous architecture in which every sensor node is energy-constrained. In this paper, we propose a novel storage architecture called TSAR (Tiered Storage ARchitecture for sensor networks) that reflects and exploits the multi-tier nature of emerging sensor networks, where the application is comprised of tens of tethered sensor proxies (or more), each controlling tens or hundreds of untethered sensors. TSAR is a component of our PRESTO [8] predictive storage architecture, which combines archival storage with caching and prediction. We believe that a fundamentally different storage architecture is necessary to address the multi-tier nature of future sensor networks. Specifically, the storage architecture needs to exploit the resource-rich nature of proxies, while respecting resource constraints at the remote sensors. No existing sensor storage architecture explicitly addresses this dichotomy in the resource capabilities of different tiers.

Any sensor storage system should also carefully exploit current technology trends, which indicate that the capacities of flash memories continue to rise as per Moore's Law, while their costs continue to plummet. Thus it will soon be feasible to equip each sensor with 1 GB of flash storage for a few tens of dollars. An even more compelling argument is the energy cost of flash storage, which can be as much as two orders of magnitude lower than that of communication. Newer NAND flash memories offer very low write and erase energy costs - our comparison of a 1 GB Samsung NAND flash device [16] and the Chipcon CC2420 802.15.4 wireless radio [4] in Section 6.2 indicates a 1:100 ratio in per-byte energy cost between the two devices, even before accounting for network protocol overheads. These trends, together with the energy-constrained nature of untethered sensors, indicate that local storage offers a viable, energy-efficient alternative to communication in sensor networks.

TSAR exploits these trends by storing data or events locally on the energy-efficient flash storage at each sensor. Sensors send concise identifying information, which we term metadata, to a nearby proxy; depending on the representation used, this metadata may be an order of magnitude or more smaller than the data itself, imposing much lower communication costs. The resource-rich proxies interact with one another to construct a distributed index of the metadata reported from all sensors, and thus an index of the associated data stored at the sensors. This index provides a unified, logical view of the distributed data, and enables an application to query and read past data efficiently - the index is used to pinpoint all data that match a read request, followed by messages to retrieve that data from the corresponding sensors. In-network index lookups are eliminated, reducing network overheads for read requests. This separation of data, which is stored at the sensors, from the metadata, which is stored at the proxies, enables TSAR to reduce energy overheads at the sensors by leveraging resources at tethered proxies.

1.2 Contributions

This paper presents TSAR, a novel two-tier storage architecture for sensor networks.
To the best of our knowledge, this is the first sensor storage system that is explicitly tailored for emerging multi-tier sensor networks. Our design and implementation of TSAR has resulted in four contributions.

At the core of the TSAR architecture is a novel distributed index structure based on interval skip graphs that we introduce in this paper. This index structure can store coarse summaries of sensor data and organize them in an ordered manner so as to be easily searchable. The data structure has O(log n) expected search and update complexity. Further, the index provides a logically unified view of all data in the system.

Second, at the sensor level, each sensor maintains a local archive that stores data on flash memory. Our storage architecture is fully stateless at each sensor from the perspective of the metadata index; all index structures are maintained at the resource-rich proxies, and only direct requests or simple queries on explicitly identified storage locations are sent to the sensors. Storage at the remote sensor is in effect treated as an appendage of the proxy, resulting in low implementation complexity, which makes it ideal for small, resource-constrained sensor platforms. Further, the local store is optimized for time-series access to archived data, as is typical in many applications. Each sensor periodically sends a summary of its data to a proxy. TSAR employs a novel adaptive summarization technique that adapts the granularity of the data reported in each summary to the ratio of false hits for application queries. More fine-grain summaries are sent whenever more false positives are observed, thereby balancing the energy cost of metadata updates and false positives.

Third, we have implemented a prototype of TSAR on a multi-tier testbed comprising Stargate-based proxies and Mote-based sensors. Our implementation supports spatio-temporal, value, and range-based queries on sensor data.

Fourth, we conduct a detailed experimental evaluation of TSAR using a combination of EmStar/EmTOS [10] and our prototype. While our EmStar/EmTOS experiments focus on the scalability of TSAR in larger settings, our prototype evaluation involves latency and energy measurements in a real setting. Our results demonstrate the logarithmic scaling property of the sparse skip graph and the low latency of end-to-end queries in a duty-cycled multi-hop network.

The remainder of this paper is structured as follows. Section 2 presents key design issues that guide our work. Sections 3 and 4 present the proxy-level index and the local archive and summarization at a sensor, respectively. Section 5 discusses our prototype implementation, and Section 6 presents our experimental results. We present related work in Section 7 and our conclusions in Section 8.

2. Design Considerations

In this section, we first describe the various components of a multi-tier sensor network assumed in our work. We then present a description of the expected usage models for this system, followed by several principles addressing these factors which guide the design of our storage system.

2.1 System Model

We envision a multi-tier sensor network comprising multiple tiers - a bottom tier of untethered remote sensor nodes, a middle tier of tethered sensor proxies, and an upper tier of applications and user terminals (see Figure 1).

The lowest tier is assumed to form a dense deployment of low-power sensors.
A canonical sensor node at this tier is equipped with low-power sensors, a micro-controller, and a radio, as well as a significant amount of flash memory (e.g., 1 GB). The common constraint for this tier is energy, and the need for a long lifetime in spite of a finite energy budget. The use of the radio, processor, RAM, and flash memory all consume energy, which needs to be limited. In general, we assume radio communication to be substantially more expensive than accesses to flash memory.

Table 1: Characteristics of sensor storage systems

System             | Data              | Index                        | Reads                       | Writes                   | Order preserving
Centralized store  | Centralized       | Centralized index            | Handled at store            | Send to store            | Yes
Local sensor store | Fully distributed | No index                     | Flooding, diffusion         | Local                    | No
GHT/DCS [24]       | Fully distributed | In-network index             | Hash to node                | Send to hashed node      | No
TSAR/PRESTO        | Fully distributed | Distributed index at proxies | Proxy lookup + sensor query | Local plus index update  | Yes

Figure 1: Architecture of a multi-tier sensor network (remote sensors keep a local data archive on flash memory and send summaries to a proxy; proxies maintain an interval skip graph and a cache, and forward user queries on time, space and value to the sensors on a cache miss)

The middle tier consists of power-rich sensor proxies that have significant computation, memory and storage resources and can use these resources continuously. In urban environments, the proxy tier would comprise tethered base-station-class nodes (e.g., Crossbow Stargate), each with multiple radios - an 802.11 radio that connects it to a wireless mesh network and a low-power radio (e.g., 802.15.4) that connects it to the sensor nodes. In remote sensing applications [10], this tier could comprise a similar Stargate node with a solar power cell. Each proxy is assumed to manage several tens to hundreds of lower-tier sensors in its vicinity. A typical sensor network deployment will contain multiple geographically distributed proxies. For instance, in a building monitoring application, one sensor proxy might be placed per floor or hallway to monitor the temperature, heat and light sensors in its vicinity.

At the highest tier of our infrastructure are applications that query the sensor network through a query interface [20]. In this work, we focus on applications that require access to past sensor data. To support such queries, the system needs to archive data on a persistent store. Our goal is to design a storage system that exploits the relative abundance of resources at proxies to mask the scarcity of resources at the sensors.

2.2 Usage Models

The design of a storage system such as TSAR is affected by the queries that are likely to be posed to it. A large fraction of queries on sensor data can be expected to be spatio-temporal in nature. Sensors provide information about the physical world; two key attributes of this information are when a particular event or activity occurred and where it occurred.
Some instances of such queries include the time and location of target or intruder detections (e.g., security and monitoring applications), notifications of specific types of events such as pressure and humidity values exceeding a threshold (e.g., industrial applications), or simple data collection queries which request data from a particular time or location (e.g., weather or environment monitoring).

Expected queries of such data include those requesting ranges of one or more attributes; for instance, a query for all image data from cameras within a specified geographic area for a certain period of time. In addition, it is often desirable to support efficient access to data in a way that maintains spatial and temporal ordering. There are several ways of supporting range queries, such as the locality-preserving hashes used in DIMS [18]. However, the most straightforward mechanism, and one which naturally provides efficient ordered access, is the use of order-preserving data structures. Order-preserving structures such as the well-known B-Tree maintain relationships between indexed values and thus allow natural access to ranges, as well as predecessor and successor operations on their key values.

Applications may also pose value-based queries that involve determining if a value v was observed at any sensor; the query returns a list of sensors and the times at which they observed this value. Variants of value queries involve restricting the query to a geographical region, or specifying a range (v1, v2) rather than a single value v. Value queries can be handled by indexing on the values reported in the summaries. Specifically, if a sensor reports a numerical value, then the index is constructed on these values. A search involves finding matching values that are either contained in the search range (v1, v2) or match the search value v exactly.

Hybrid value and spatio-temporal queries are also possible. Such queries specify a time interval, a value range and a spatial region and request all records that match these attributes - for example, find all instances where the temperature exceeded 100°F at location R during the month of August. These queries require an index on both time and value.

In TSAR our focus is on range queries on value or time, with planned extensions to include spatial scoping.
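As an illustration of such a hybrid query, consider the sketch below; the field names are ours, not TSAR's API, and the spatial scope is carried but not evaluated, since spatial scoping is the planned extension:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class HybridQuery:
    """E.g., 'all records where the temperature exceeded 100F
    at location R during the month of August'."""
    t_start: float
    t_end: float
    v_low: float
    v_high: float
    region: Any = None  # spatial scope: a planned TSAR extension

    def matches(self, t, v):
        return (self.t_start <= t <= self.t_end
                and self.v_low <= v <= self.v_high)
```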
2.3 Design Principles

Our design of a sensor storage system for multi-tier networks is based on the following set of principles, which address the issues arising from the system and usage models above.

• Principle 1: Store locally, access globally: Current technology allows local storage to be significantly more energy-efficient than network communication, and technology trends show no signs of erasing this gap in the near future. For maximum network life, a sensor storage system should leverage the flash memory on sensors to archive data locally, substituting cheap memory operations for expensive radio transmission. But without efficient mechanisms for retrieval, the energy gains of local storage may be outweighed by communication costs incurred by the application in searching for data. We believe that if the data storage system provides the abstraction of a single logical store to applications, as does TSAR, then it will have additional flexibility to optimize communication and storage costs.

• Principle 2: Distinguish data from metadata: Data must be identified so that it may be retrieved by the application without exhaustive search. To do this, we associate metadata with each data record - data fields of known syntax which serve as identifiers and may be queried by the storage system. Examples of this metadata are data attributes such as location and time, or selected or summarized data values. We leverage the presence of resource-rich proxies to index metadata for resource-constrained sensors. The proxies share this metadata index to provide a unified logical view of all data in the system, thereby enabling efficient, low-latency lookups. Such a tier-specific separation of data storage from metadata indexing enables the system to exploit the idiosyncrasies of multi-tier networks, while improving performance and functionality.

• Principle 3: Provide data-centric query support: In a sensor application the specific location (i.e., offset) of a record in a stream is unlikely to be of significance, except if it conveys information concerning the location and/or time at which the information was generated. We thus expect that applications will be best served by a query interface which allows them to locate data by value or attribute (e.g., location and time), rather than by a read interface for unstructured data. This in turn implies the need to maintain metadata in the form of an index that provides low cost lookups.

2.4 System Design

TSAR embodies these design principles by employing local storage at sensors and a distributed index at the proxies. The key features of the system design are as follows.

In TSAR, writes occur at sensor nodes, and are assumed to consist of both opaque data as well as application-specific metadata. This metadata is a tuple of known types, which may be used by the application to locate and identify data records, and which may be searched on and compared by TSAR in the course of locating data for the application. In a camera-based sensing application, for instance, this metadata might include coordinates describing the field of view, average luminance, and motion values, in addition to basic information such as time and sensor location. Depending on the application, this metadata may be two or three orders of magnitude smaller than the data itself, for instance if the metadata consists of features extracted from image or acoustic data.

In addition to storing data locally, each sensor periodically sends a summary of reported metadata to a nearby proxy. The summary contains information such as the sensor ID, the interval (t1, t2) over which the summary was generated, a handle identifying the corresponding data record (e.g., its location in flash memory), and a coarse-grain representation of the metadata associated with the record. The precise data representation used in the summary is application-specific; for instance, a temperature sensor might choose to report the maximum and minimum temperature values observed in an interval as a coarse-grain representation of the actual time series.
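A summary with the contents listed above might look as follows, here using the min/max representation of the temperature example; the field names and types are ours, since the paper specifies what a summary carries, not its encoding:

```python
from dataclasses import dataclass

@dataclass
class SummaryRecord:
    """Per-interval summary sent from a sensor to its proxy."""
    sensor_id: int
    t_start: float  # start of the interval (t1)
    t_end: float    # end of the interval (t2)
    handle: int     # e.g., offset of the data record in flash
    v_min: float    # coarse-grain representation of the metadata:
    v_max: float    # the min/max values observed in the interval
```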
The proxy uses the summary to construct an index; the index is global in that it stores information from all sensors in the system, and it is distributed across the various proxies in the system. Thus, applications see a unified view of distributed data, and can query the index at any proxy to get access to data stored at any sensor. Specifically, each query triggers lookups in this distributed index and the list of matches is then used to retrieve the corresponding data from the sensors. There are several distributed index and lookup methods which might be used in this system; however, the index structure described in Section 3 is highly suited for the task.

Since the index is constructed using a coarse-grain summary instead of the actual data, index lookups will yield approximate matches. The TSAR summarization mechanism guarantees that index lookups will never yield false negatives, i.e., it will never miss summaries which include the value being searched for. However, index lookups may yield false positives, where a summary matches the query but when queried the remote sensor finds no matching value, wasting network resources. The more coarse-grained the summary, the lower the update overhead and the greater the fraction of false positives; finer summaries incur greater update overhead while reducing the query overhead due to false positives. Remote sensors may easily distinguish false positives from queries which result in search hits, and calculate the ratio between the two; based on this ratio, TSAR employs a novel adaptive technique that dynamically varies the granularity of sensor summaries to balance the metadata overhead and the overhead of false positives.

3. Data Structures

At the proxy tier, TSAR employs a novel index structure called the Interval Skip Graph, which is an ordered, distributed data structure for finding all intervals that contain a particular point or range of values. Interval skip graphs combine Interval Trees [5], an interval-based binary search tree, with Skip Graphs [1], an ordered, distributed data structure for peer-to-peer systems [13]. The resulting data structure has two properties that make it ideal for sensor networks. First, it has O(log n) search complexity for accessing the first interval that matches a particular value or range, and constant complexity for accessing each successive interval. Second, indexing of intervals rather than individual values makes the data structure ideal for indexing summaries over time or value. Such summary-based indexing is a more natural fit for energy-constrained sensor nodes, since transmitting summaries incurs less energy overhead than transmitting all sensor data.

Definitions: We assume that there are Np proxies and Ns sensors in a two-tier sensor network. Each proxy is responsible for multiple sensor nodes, and no assumption is made about the number of sensors per proxy. Each sensor transmits interval summaries of data or events regularly to one or more proxies that it is associated with, where interval i is represented as [lowi, highi]. These intervals can correspond to time or value ranges that are used for indexing sensor data.
No assumption is made about the size of an interval or about the amount of overlap between intervals.

Range queries on the intervals are posed by users to the network of proxies and sensors; each query q needs to determine all index values that overlap the interval [lowq, highq]. The goal of the interval skip graph is to index all intervals such that the set that overlaps a query interval can be located efficiently. In the rest of this section, we describe the interval skip graph in greater detail.

3.1 Skip Graph Overview

In order to inform the description of the Interval Skip Graph, we first provide a brief overview of the Skip Graph data structure; for a more extensive description the reader is referred to [1]. Figure 2 shows a skip graph which indexes 8 keys; the keys may be seen along the bottom, and above each key are the pointers associated with that key. Each data element, consisting of a key and its associated pointers, may reside on a different node in the network, and pointers therefore identify both a remote node as well as a data element on that node.

[Figure 2: Skip Graph of 8 Elements. Figure 3: Interval Skip Graph. Figure 4: Distributed Interval Skip Graph.]

In this figure we may see the following properties of a skip graph:

• Ordered index: The keys are members of an ordered data type, for instance integers. Lookups make use of ordered comparisons between the search key and existing index entries. In addition, the pointers at the lowest level point directly to the successor of each item in the index.

• In-place indexing: Data elements remain on the nodes where they were inserted, and messages are sent between nodes to establish links between those elements and others in the index.

• Log n height: There are log2 n pointers associated with each element, where n is the number of data elements indexed. Each pointer belongs to a level l in [0 ... log2 n − 1], and together with some other pointers at that level forms a chain of n/2^l elements.

• Probabilistic balance: Rather than relying on re-balancing operations which may be triggered at insert or delete, skip graphs implement a simple random balancing mechanism which maintains close to perfect balance on average, with an extremely low probability of significant imbalance.

• Redundancy and resiliency: Each data element forms an independent search tree root, so searches may begin at any node in the network, eliminating hot spots at a single search root. In addition the index is resilient against node failure; data on the failed node will not be accessible, but remaining data elements will be accessible through search trees rooted on other nodes.

In Figure 2 we see the process of searching for a particular value in a skip graph. The pointers reachable from a single data element form a binary tree: a pointer traversal at the highest level skips over n/2 elements, n/4 at the next level, and so on. Search consists of descending the tree from the highest level to level 0, at each level comparing the target key with the next element at that level and deciding whether or not to traverse. In the perfectly balanced case shown here there are log2 n levels of pointers, and search will traverse 0 or 1 pointers at each level. We assume that each data element resides on a different node, and measure search cost by the number of messages sent (i.e., the number of pointers traversed); this will clearly be O(log n).
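The following sketch illustrates this descent on a single-machine model of the structure; in a real skip graph each pointer traversal would be a network message to the node holding the next element. The representation (a perfectly balanced structure in which every element keeps one forward pointer per level) is a simplification we assume for illustration.

```python
import math

class Element:
    def __init__(self, key, height):
        self.key = key
        self.forward = [None] * height   # forward[l]: next element in this chain at level l

def build_balanced(keys):
    """Build the perfectly balanced structure of Figure 2: at level l, every
    2^l-th element is chained together."""
    levels = max(1, int(math.log2(len(keys))))   # log2 n levels: 0 .. levels-1
    head = Element(float("-inf"), levels)        # local sentinel to start the search
    elems = [Element(k, levels) for k in sorted(keys)]
    for level in range(levels):
        prev = head
        for i, e in enumerate(elems):
            if i % (2 ** level) == 0:
                prev.forward[level] = e
                prev = e
    return head, levels - 1

def find_first_geq(head, key, top_level):
    """Descend from the highest level; return the first element with key >= search key."""
    cur = head
    for level in range(top_level, -1, -1):
        # advance along this level while the next element is still below the key
        while cur.forward[level] is not None and cur.forward[level].key < key:
            cur = cur.forward[level]     # in a distributed setting: one network message
    return cur.forward[0]                # level-0 successor, or None if key > all keys

head, top = build_balanced([1, 7, 9, 13, 17, 21, 25, 31])
print(find_first_geq(head, 21, top).key)  # -> 21, matching find(21) in Figure 2
```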
Tree update proceeds from the bottom, as in a B-Tree, with the root(s) being promoted in level as the tree grows. In this way, for instance, the two chains at level 1 always contain n/2 entries each, and there is never a need to split chains as the structure grows. The update process then consists of choosing which of the 2^l chains to insert an element into at each level l, and inserting it in the proper place in each chain.

Maintaining a perfectly balanced skip graph as shown in Figure 2 would be quite complex; instead, the probabilistic balancing method introduced in Skip Lists [23] is used, which trades off a small amount of overhead in the expected case in return for simple update and deletion. The basis for this method is the observation that any element which belongs to a particular chain at level l can only belong to one of two chains at level l+1. To insert an element we ascend levels starting at 0, randomly choosing one of the two possible chains at each level, and stopping when we reach an empty chain.

One means of implementation (e.g., as described in [1]) is to assign each element an arbitrarily long random bit string. Each chain at level l is then constructed from those elements whose bit strings match in the first l bits, thus creating 2^l possible chains at each level and ensuring that each chain splits into exactly two chains at the next level. Although the resulting structure is not perfectly balanced, following the analysis in [23] we can show that the probability of it being significantly out of balance is extremely small; in addition, since the structure is determined by the random number stream, input data patterns cannot cause the tree to become imbalanced.
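A minimal sketch of this membership-vector scheme follows; the helper names are ours, and we truncate the "arbitrarily long" bit string to a fixed number of random bits for illustration.

```python
import random

def membership_vector(bits=32):
    """Assign an element a random bit string (truncated here for illustration)."""
    return [random.randint(0, 1) for _ in range(bits)]

def chain_id(vector, level):
    """The chain an element belongs to at a given level is named by the first
    `level` bits of its membership vector, so each chain at level l splits
    into exactly two chains at level l+1."""
    return tuple(vector[:level])

v = membership_vector()
print(chain_id(v, 0))  # () : every element shares the single level-0 chain
print(chain_id(v, 2))  # one of the 4 possible level-2 chains
```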
3.2 Interval Skip Graph

A skip graph is designed to store single-valued entries. In this section, we introduce a novel data structure that extends skip graphs to store intervals [lowi, highi] and allows efficient searches for all intervals covering a value v, i.e., {i : lowi ≤ v ≤ highi}. Our data structure can be extended to range searches in a straightforward manner.

The interval skip graph is constructed by applying the method of augmented search trees, as described by Cormen, Leiserson, and Rivest [5] and applied to binary search trees to create an Interval Tree. The method is based on the observation that a search structure based on comparison of ordered keys, such as a binary tree, may also be used to search on a secondary key which is non-decreasing in the first key.

Given a set of intervals sorted by lower bound, lowi ≤ lowi+1, we define the secondary key as the cumulative maximum, maxi = maxk=0...i (highk). The set of intervals intersecting a value v may then be found by searching for the first interval (and thus the interval with least lowi) such that maxi ≥ v. We then traverse intervals in increasing order of lower bound, until we find the first interval with lowi > v, selecting those intervals which intersect v.

Using this approach we augment the skip graph data structure, as shown in Figure 3, so that each entry stores a range (lower bound and upper bound) and a secondary key (cumulative maximum of upper bound). To efficiently calculate the secondary key maxi for an entry i, we take the greatest of highi and the maximum values reported by each of i's left-hand neighbors.

To search for those intervals containing the value v, we first search for v on the secondary index, maxi, and locate the first entry with maxi ≥ v (by the definition of maxi, for this data element maxi = highi). If lowi > v, then this interval does not contain v, and no other intervals will either, so we are done. Otherwise we traverse the index in increasing order of lowi, returning matching intervals, until we reach an entry with lowi > v, and we are done. Searches for all intervals which overlap a query range, or which completely contain a query range, are straightforward extensions of this mechanism.

Lookup Complexity: Lookup for the first interval that matches a given value is performed in a manner very similar to an interval tree. The complexity of search is O(log n). The number of intervals that match a range query can vary depending on the amount of overlap in the intervals being indexed, as well as the range specified in the query.

Insert Complexity: In an interval tree or interval skip list, the maximum value for an entry need only be calculated over the subtree rooted at that entry, as this value will be examined only when searching within the subtree rooted at that entry. For a simple interval skip graph, however, this maximum value for an entry must be computed over all entries preceding it in the index, as searches may begin anywhere in the data structure, rather than at a distinguished root element. It may easily be seen that in the worst case the insertion of a single interval (one that covers all existing intervals in the index) will trigger the update of all entries in the index, for a worst-case insertion cost of O(n).
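The search procedure above is easy to state over a flat sorted array, which the sketch below uses in place of the distributed structure; the binary search stands in for the O(log n) skip graph descent on the secondary key, and all names are our own.

```python
import bisect
from itertools import accumulate

def stabbing_query(intervals, v):
    """Return all intervals [low, high] containing v, given intervals sorted by low.
    Mirrors the interval skip graph search: locate the first entry whose cumulative
    maximum of high is >= v, then scan forward until low > v."""
    lows = [lo for lo, _ in intervals]
    cummax = list(accumulate((hi for _, hi in intervals), max))
    i = bisect.bisect_left(cummax, v)      # first entry with cummax >= v
    matches = []
    while i < len(intervals) and lows[i] <= v:
        lo, hi = intervals[i]
        if hi >= v:                        # this entry actually intersects v
            matches.append((lo, hi))
        i += 1                             # linear traversal of successive entries
    return matches

# The eight intervals from Figure 3; contains(13) matches only [6, 14].
ivals = [(2, 5), (6, 14), (9, 12), (14, 16), (15, 23), (18, 19), (20, 27), (21, 30)]
print(stabbing_query(ivals, 13))           # -> [(6, 14)]
```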
3.3 Sparse Interval Skip Graph

The final extensions we propose take advantage of the difference between the number of items indexed in a skip graph and the number of systems on which these items are distributed. The cost in network messages of an operation may be reduced by arranging the data structure so that most structure traversals occur locally on a single node, and thus incur zero network cost. In addition, since both congestion and failure occur on a per-node basis, we may eliminate links without adverse consequences if those links only contribute to load distribution and/or resiliency within a single node. These two modifications allow us to achieve reductions in asymptotic complexity of both update and search.

As noted in Section 3.2, insert and delete cost on an interval skip graph has a worst-case complexity of O(n), compared to O(log n) for an interval tree. The main reason for the difference is that skip graphs have a full search structure rooted at each element, in order to distribute load and provide resilience to system failures in a distributed setting. However, in order to provide load distribution and failure resilience it is only necessary to provide a full search structure for each system. If, as in TSAR, the number of nodes (proxies) is much smaller than the number of data elements (data summaries indexed), then this will result in significant savings.

Implementation: To construct a sparse interval skip graph, we ensure that there is a single distinguished element on each system, the root element for that system; all searches will start at one of these root elements. When adding a new element, rather than splitting lists at increasing levels l until the element is in a list with no others, we stop when we find that the element would be in a list containing no root elements, thus ensuring that the element is reachable from all root elements. An example of applying this optimization may be seen in Figure 5. (In practice, rather than designating existing data elements as roots, as shown, it may be preferable to insert null values at startup.)

When using the technique of membership vectors as in [1], this may be done by broadcasting the membership vectors of each root element to all other systems, and stopping insertion of an element at level l when it does not share an l-bit prefix with any of the Np root elements. The expected number of roots sharing a log2 Np-bit prefix is 1, giving an expected height for each element of log2 Np + O(1). An alternate implementation, which distributes information concerning root elements at pointer establishment time, is omitted due to space constraints; this method eliminates the need for additional messages.

Performance: In a (non-interval) sparse skip graph, since the expected height of an inserted element is now log2 Np + O(1), expected insertion complexity is O(log Np), rather than O(log n), where Np is the number of root elements and thus the number of separate systems in the network. (In the degenerate case of a single system we have a skip list; with splitting probability 0.5 the expected height of an individual element is 1.) Note that since searches are started at root elements of expected height log2 n, search complexity is not improved.

For an interval sparse skip graph, update performance is improved considerably compared to the O(n) worst case for the non-sparse case. In an augmented search structure such as this, an element only stores information for nodes which may be reached from that element, e.g., the subtree rooted at that element in the case of a tree. Thus, when updating the maximum value in an interval tree, the update is only propagated towards the root. In a sparse interval skip graph, updates to a node only propagate towards the Np root elements, for a worst-case cost of Np log2 n.

Shortcut search: When beginning a search for a value v, rather than beginning at the root on that proxy, we can find the element that is closest to v (e.g., using a secondary local index), and then begin the search at that element. The expected distance between this element and the search terminus is log2 Np, and the search will now take on average log2 Np + O(1) steps. To illustrate this optimization, in Figure 4, depending on the choice of search root, a search for [21, 30] beginning at node 2 may take 3 network hops, traversing to node 1, then back to node 2, and finally to node 3 where the destination is located, for a cost of 3 messages. The shortcut search, however, locates the intermediate data element on node 2, and then proceeds directly to node 3 for a cost of 1 message.
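The stopping rule for sparse insertion can be phrased as a small function over membership vectors. In this sketch (our own formulation, assuming each proxy has already broadcast its root's vector) an element keeps ascending levels only while some root still shares its prefix, so it always remains reachable from every root.

```python
def insertion_height(element_bits, root_vectors):
    """Number of levels at which a new element is inserted in a sparse skip graph:
    stop ascending at the first level l where no root element shares an l-bit
    prefix, keeping the element reachable from every root."""
    l = 0
    while any(root[:l + 1] == element_bits[:l + 1] for root in root_vectors):
        l += 1
    return l + 1   # element participates in chains at levels 0 .. l

# Example with Np = 2 roots (bit strings assumed purely for illustration):
roots = [[0, 1, 1, 0], [1, 0, 0, 1]]
print(insertion_height([1, 0, 1, 1], roots))  # -> 3: shares the 2-bit prefix [1, 0]
```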
Performance: This technique may be applied to the primary-key search which is the first of two insertion steps in an interval skip graph. By combining the shortcut optimization with sparse interval skip graphs, the expected cost of insertion is now O(log Np), independent of the size of the index or the degree of overlap of the inserted intervals.

3.4 Alternative Data Structures

Thus far we have only compared the sparse interval skip graph with similar structures from which it is derived. A comparison with several other data structures which meet at least some of the requirements for the TSAR index is shown in Table 2.

Table 2: Comparison of Distributed Index Structures

| Structure | Range Query Support | Interval Representation | Re-balancing | Resilience | Small Networks | Large Networks |
| DHT, GHT | no | no | no | yes | good | good |
| Local index, flood query | yes | yes | no | yes | good | bad |
| P-tree, RP* (distributed B-Trees) | yes | possible | yes | no | good | good |
| DIM | yes | no | yes | yes | yes | yes |
| Interval Skip Graph | yes | yes | no | yes | good | good |

[Figure 5: Sparse Interval Skip Graph]

The hash-based systems, DHT [25] and GHT [26], lack the ability to perform range queries and are thus not well-suited to indexing spatio-temporal data. Indexing locally using an appropriate single-node structure and then flooding queries to all proxies is a competitive alternative for small networks; for large networks the linear dependence on the number of proxies becomes an issue. Two distributed B-Trees were examined, P-Trees [6] and RP* [19]. Each of these supports range queries, and in theory could be modified to support indexing of intervals; however, they both require complex re-balancing, and do not provide the resilience characteristics of the other structures. DIM [18] provides the ability to perform spatio-temporal range queries, and has the necessary resilience to failures; however, it cannot be used to index intervals, which are used by TSAR's data summarization algorithm.

4. Data Storage and Summarization

Having described the proxy-level index structure, we turn to the mechanisms at the sensor tier. TSAR implements two key mechanisms at the sensor tier. The first is a local archival store at each sensor node that is optimized for resource-constrained devices. The second is an adaptive summarization technique that enables each sensor to adapt to changing data and query characteristics. The rest of this section describes these mechanisms in detail.

4.1 Local Storage at Sensors

Interval skip graphs provide an efficient mechanism to look up sensor nodes containing data relevant to a query. These queries are then routed to the sensors, which locate the relevant data records in the local archive and respond back to the proxy. To enable such lookups, each sensor node in TSAR maintains an archival store of sensor data. While the implementation of such an archival store is straightforward on resource-rich devices that can run a database, sensors are often power- and resource-constrained.
Consequently, the sensor archiving subsystem in TSAR is explicitly designed to exploit characteristics of sensor data in a resource-constrained setting.

[Figure 6: Single storage record, consisting of a timestamp, calibration parameters, data/event attributes, a size field, and opaque data.]

Sensor data has very distinct characteristics that inform our design of the TSAR archival store. Sensors produce time-series data streams, and therefore temporal ordering of data is a natural and simple way of storing archived sensor data. In addition to simplicity, a temporally ordered store is often suitable for many sensor data processing tasks, since they involve time-series data processing. Examples include signal processing operations such as FFT, wavelet transforms, clustering, similarity matching, and target detection. Consequently, the local archival store is a collection of records, designed as an append-only circular buffer, where new records are appended to the tail of the buffer. The format of each data record is shown in Figure 6. Each record has a metadata field which includes a timestamp, sensor settings, calibration parameters, etc. Raw sensor data is stored in the data field of the record. The data field is opaque and application-specific: the storage system does not know or care about interpreting this field. A camera-based sensor, for instance, may store binary images in this data field. In order to support a variety of applications, TSAR supports variable-length data fields; as a result, record sizes can vary from one record to another.

Our archival store supports three operations on records: create, read, and delete. Due to the append-only nature of the store, creation of records is simple and efficient. The create operation simply creates a new record and appends it to the tail of the store. Since records are always written at the tail, the store need not maintain a free-space list. All fields of the record need to be specified at creation time; thus, the size of the record is known a priori and the store simply allocates the corresponding number of bytes at the tail to store the record. Since writes are immutable, the size of a record does not change once it is created. A sketch of this create path appears below; the read path and the associated proxy metadata are described next.
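The following is a minimal sketch of the create operation over a flat byte buffer, assuming a simple length-prefixed layout; wrap-around and flash page management are omitted, and all names are our own rather than TSAR's.

```python
import struct

class AppendOnlyLog:
    """Toy model of the sensor's append-only archive: records are written
    only at the tail, and a record is immutable once created."""
    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.tail = 0

    def create(self, metadata: bytes, data: bytes):
        # length-prefixed record: 2-byte size, then metadata and opaque data
        record = struct.pack("<H", len(metadata) + len(data)) + metadata + data
        if self.tail + len(record) > len(self.buf):
            raise MemoryError("full: a real store would wrap and reclaim old records")
        start = self.tail
        self.buf[start:start + len(record)] = record
        self.tail += len(record)
        return start, len(record)   # handle (offset, length), as later reported to the proxy

log = AppendOnlyLog(4096)
print(log.create(b"\x01\x02", b"raw-sample"))  # -> (0, 14)
```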
The read operation enables stored records to be retrieved in order to answer queries. In a traditional database system, efficient lookups are enabled by maintaining a structure such as a B-tree that indexes certain keys of the records. However, this can be quite complex for a small sensor node with limited resources. Consequently, TSAR sensors do not maintain any index for the data stored in their archive. Instead, they rely on the proxies to maintain this metadata index: sensors periodically send the proxy information summarizing the data contained in a contiguous sequence of records, as well as a handle indicating the location of these records in flash memory. The mechanism works as follows: in addition to the summary of sensor data, each node sends metadata to the proxy containing the time interval corresponding to the summary, as well as the start and end offsets of the flash memory location where the corresponding raw data is stored (as shown in Figure 7).

[Figure 7: Sensor Summarization. Each summary sent to the proxy carries a time interval and the start/end offsets of the corresponding records in the local flash archive; the proxy inserts these summaries into the interval skip graph.]

Thus, random access is enabled at the granularity of a summary: the start offset of each chunk of records represented by a summary is known to the proxy. Within this collection, records are accessed sequentially. When a query matches a summary in the index, the sensor uses these offsets to access the relevant records on its local flash by sequentially reading data from the start address until the end address. Any query-specific operation can then be performed on this data. Thus, no index needs to be maintained at the sensor, in line with our goal of simplifying sensor state management. The state of the archive is captured in the metadata associated with the summaries, and is stored and maintained at the proxy.

While we anticipate local storage capacity to be large, eventually there might be a need to overwrite older data, especially in high data rate applications. This may be done via techniques such as multi-resolution storage of data [9], or simply by overwriting older data. When older data is overwritten, a delete operation is performed, where an index entry is deleted from the interval skip graph at the proxy and the corresponding storage space in flash memory at the sensor is freed.

4.2 Adaptive Summarization

The data summaries serve as glue between the storage at the remote sensor and the index at the proxy. Each update from a sensor to the proxy includes three pieces of information: the summary, a time period corresponding to the summary, and the start and end offsets for the flash archive. In general, the proxy can index the time interval representing a summary or the value range reported in the summary (or both). The former index enables quick lookups on all records seen during a certain interval, while the latter index enables quick lookups on all records matching a certain value.

As described in Section 2.4, there is a trade-off between the energy used in sending summaries (and thus the frequency and resolution of those summaries) and the cost of false hits during queries. The coarser and less frequent the summary information, the less energy required, while false query hits in turn waste energy on requests for non-existent data.

TSAR employs an adaptive summarization technique that balances the cost of sending updates against the cost of false positives. The key intuition is that each sensor can independently identify the fraction of false hits and true hits for queries that access its local archive. If most queries result in true hits, then the sensor determines that the summary can be coarsened further to reduce update costs without adversely impacting the hit ratio. If many queries result in false hits, then the sensor makes the granularity of each summary finer to reduce the number and overhead of false hits.

The resolution of the summary depends on two parameters: the interval over which summaries of the data are constructed and transmitted to the proxy, as well as the size of the application-specific summary. Our focus in this paper is on the interval over which the summary is constructed. Changing the size of the data summary can be performed in an application-specific manner (e.g., using wavelet compression techniques as in [9]) and is beyond the scope of this paper. Currently, TSAR employs a simple summarization scheme that computes the ratio of false and true hits and decreases (increases) the interval between summaries whenever this ratio increases (decreases) beyond a threshold.
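One way to phrase this feedback loop is sketched below. The update rule mirrors the ε-threshold form used in the evaluation (Section 6.4); the cost inputs, step size, and bounds are our own assumptions, not TSAR's tuned parameters.

```python
def adapt_granularity(g, cost_updates, cost_false_hits, eps=0.1, step=2,
                      g_min=1, g_max=64):
    """Adjust records-per-summary g: coarsen when update cost dominates,
    refine when false-hit cost dominates (epsilon-threshold rule)."""
    ratio = cost_updates / max(cost_false_hits, 1e-9)  # guard against division by zero
    if ratio > 1 + eps:
        g = min(g * step, g_max)    # summaries too expensive: send coarser ones
    elif ratio < 1 - eps:
        g = max(g // step, g_min)   # too many false hits: send finer ones
    return g

g = 8
g = adapt_granularity(g, cost_updates=5.0, cost_false_hits=2.0)
print(g)  # -> 16: update cost dominates, so summaries become coarser
```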
5. TSAR Implementation

We have implemented a prototype of TSAR on a multi-tier sensor network testbed. Our prototype employs Crossbow Stargate nodes to implement the proxy tier. Each Stargate node employs a 400MHz Intel XScale processor with 64MB RAM and runs the Linux 2.4.19 kernel and EmStar release 2.1. The proxy nodes are equipped with two wireless radios, a Cisco Aironet 340-based 802.11b radio and a hostmote bridge to the Mica2 sensor nodes using the EmStar transceiver. The 802.11b wireless network is used for inter-proxy communication within the proxy tier, while the wireless bridge enables sensor-proxy communication. The sensor tier consists of Crossbow Mica2s and Mica2dots, each consisting of a 915MHz CC1000 radio, a BMAC protocol stack, a 4Mb on-board flash memory and an ATMega 128L processor. The sensor nodes run TinyOS 1.1.8. In addition to the on-board flash, the sensor nodes can be equipped with external MMC/SD flash cards using a custom connector. The proxy nodes can be equipped with external storage such as high-capacity compact flash (up to 4GB), 6GB micro-drives, or up to 60GB 1.8-inch mobile disk drives.

Since sensor nodes may be several hops away from the nearest proxy, the sensor tier employs multi-hop routing to communicate with the proxy tier. In addition, to reduce the power consumption of the radio while still making the sensor node available for queries, low-power listening is enabled, in which the radio receiver is periodically powered up for a short interval to sense the channel for transmissions, and the packet preamble is extended to account for the latency until the next interval when the receiving radio wakes up. Our prototype employs the MultiHopLEPSM routing protocol with the BMAC layer configured in the low-power mode with an 11% duty cycle (one of the default BMAC [22] parameters).

Our TSAR implementation on the Mote involves a data-gathering task that periodically obtains sensor readings and logs these readings to flash memory. The flash memory is treated as a circular append-only store and the format of the logged data is depicted in Figure 6. The Mote sends a report to the proxy every N readings, summarizing the observed data. The report contains: (i) the address of the Mote, (ii) a handle that contains an offset and the length of the region in flash memory containing the data referred to by the summary, (iii) an interval (t1, t2) over which this report is generated, (iv) a tuple (low, high) representing the minimum and the maximum values observed at the sensor in the interval, and (v) a sequence number. The sensor updates are used to construct a sparse interval skip graph that is distributed across proxies, via network messages between proxies over the 802.11b wireless network.

Our current implementation supports queries that request records matching a time interval (t1, t2) or a value range (v1, v2). Spatial constraints are specified using sensor IDs. Given a list of matching intervals from the skip graph, TSAR supports two types of messages to query the sensor: lookup and fetch. A lookup message triggers a search within the corresponding region in flash memory and returns the number of matching records in that memory region (but does not retrieve data). In contrast, a fetch message not only triggers a search but also returns all matching data records to the proxy. Lookup messages are useful for polling a sensor, for instance, to determine if a query matches too many records.
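The sensor-side handling of these two message types can be sketched as a single scan over the flash region named by a summary handle. This reuses the AppendOnlyLog sketch from Section 4.1 and its assumed length-prefixed layout; `matches` stands in for the query predicate, and none of the names are TSAR's own.

```python
import struct

def handle_query(log, offset, length, matches, fetch=False):
    """Scan the flash region [offset, offset + length) named by a summary handle.
    A lookup returns only the number of matching records; a fetch also
    returns the matching records themselves."""
    end, count, records = offset + length, 0, []
    while offset < end:
        (rec_len,) = struct.unpack_from("<H", log.buf, offset)   # 2-byte size prefix
        payload = bytes(log.buf[offset + 2 : offset + 2 + rec_len])
        if matches(payload):
            count += 1
            if fetch:
                records.append(payload)
        offset += 2 + rec_len          # records are read sequentially, start to end
    return (count, records) if fetch else count
```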
[Figure 8: Skip Graph Insert Performance. Number of messages vs. index size for (a) James Reserve data and (b) synthetic data, comparing skip graph insert, sparse skip graph insert, and initial lookup.]

6. Experimental Evaluation

In this section, we evaluate the efficacy of TSAR using our prototype and simulations. The testbed for our experiments consists of four Stargate proxies and twelve Mica2 and Mica2dot sensors; three sensors are assigned to each proxy. Given the limited size of our testbed, we employ simulations to evaluate the behavior of TSAR in larger settings. Our simulation employs the EmTOS emulator [10], which enables us to run the same code in simulation and on the hardware platform.

Rather than using live data from a real sensor, to ensure repeatable experiments, we seed each sensor node with a dataset (i.e., a trace) that dictates the values reported by that node to the proxy. One section of the flash memory on each sensor node is programmed with data points from the trace; these observations are then replayed during an experiment, logged to the local archive (located in flash memory as well), and reported to the proxy. The first dataset used to evaluate TSAR is a temperature dataset from James Reserve [27] that includes data from eleven temperature sensor nodes over a period of 34 days. The second dataset is synthetically generated; the trace for each sensor is generated using a uniformly distributed random walk through the value space.

Our experimental evaluation has four parts. First, we run EmTOS simulations to evaluate the lookup, update and delete overhead for sparse interval skip graphs using the real and synthetic datasets. Second, we provide summary results from micro-benchmarks of the storage component of TSAR, which include empirical characterization of the energy costs and latency of reads and writes for the flash memory chip as well as the whole mote platform, and comparisons to published numbers for other storage and communication technologies. These micro-benchmarks form the basis for our full-scale evaluation of TSAR on a testbed of four Stargate proxies and twelve Motes. We measure the end-to-end query latency in our multi-hop testbed as well as the query processing overhead at the mote tier. Finally, we demonstrate the adaptive summarization capability at each sensor node. The remainder of this section presents our experimental results.

6.1 Sparse Interval Skip Graph Performance

This section evaluates the performance of sparse interval skip graphs by quantifying insert, lookup and delete overheads.

We assume a proxy tier with 32 proxies and construct sparse interval skip graphs of various sizes using our datasets.
For each skip graph, we evaluate the cost of inserting a new value into the index. Each entry was deleted after its insertion, enabling us to quantify the delete overhead as well. Figures 8(a) and (b) quantify the insert overhead for our two datasets: each insert entails an initial traversal that incurs log n messages, followed by neighbor pointer updates at increasing levels, incurring a cost of 4 log n messages. Our results demonstrate this behavior, and show as well that the performance of delete, which also involves an initial traversal followed by pointer updates at each level, incurs a similar cost.

[Figure 9: Skip Graph Lookup Performance. Initial lookup and traversal messages vs. index size for (a) James Reserve data and (b) synthetic data.]

[Figure 10: Skip Graph Overheads. (a) Impact of the number of proxies on insert and lookup cost; (b) impact of redundant summaries on insert and lookup cost.]

Next, we evaluate the lookup performance of the index structure. Again, we construct skip graphs of various sizes using our datasets and evaluate the cost of a lookup on the index structure. Figures 9(a) and (b) depict our results. There are two components for each lookup: the lookup of the first interval that matches the query and, in the case of overlapping intervals, the subsequent linear traversal to identify all matching intervals. The initial lookup can be seen to take log n messages, as expected. The costs of the subsequent linear traversal, however, are highly data-dependent. For instance, temperature values for the James Reserve data exhibit significant spatial correlations, resulting in significant overlap between different intervals and variable, high traversal cost (see Figure 9(a)). The synthetic data, however, has less overlap and incurs lower traversal overhead, as shown in Figure 9(b).

Since the previous experiments assumed 32 proxies, we evaluate the impact of the number of proxies on skip graph performance. We vary the number of proxies from 10 to 48 and distribute a skip graph with 4096 entries among these proxies. We construct regular interval skip graphs as well as sparse interval skip graphs using these entries and measure the overhead of inserts and lookups. Thus, the experiment also seeks to demonstrate the benefits of sparse skip graphs over regular skip graphs. Figure 10(a) depicts our results. In regular skip graphs, the complexity of insert is O(log2 n) in the expected case (and O(n) in the worst case), where n is the number of elements. This complexity is unaffected by changing the number of proxies, as indicated by the flat line in the figure. Sparse skip graphs require fewer pointer updates; however, their overhead is dependent on the number of proxies, and is O(log2 Np) in the expected case, independent of n.
This results in a significant reduction in overhead when the number of proxies is small; the benefit diminishes as the number of proxies increases.

Failure handling is an important issue in a multi-tier sensor architecture, since it relies on many components: proxies, sensor nodes and routing nodes can fail, and wireless links can fade. Handling of many of these failure modes is outside the scope of this paper; however, we consider the resilience of skip graphs to proxy failures. In this case, skip graph search (and subsequent repair operations) can follow any one of the other links from a root element. Since a sparse skip graph has search trees rooted at each node, searching can then resume once the lookup request has routed around the failure. Together, these two properties ensure that even if a proxy fails, the remaining entries in the skip graph will be reachable with high probability; only the entries on the failed proxy and the corresponding data at the sensors become inaccessible.

To ensure that all data on sensors remains accessible, even in the event of failure of a proxy holding index entries for that data, we incorporate redundant index entries. TSAR employs a simple redundancy scheme where additional coarse-grain summaries are used to protect regular summaries. Each sensor sends summary data periodically to its local proxy, but less frequently sends a lower-resolution summary to a backup proxy. The backup summary represents all of the data represented by the finer-grained summaries, but in a lossier fashion, thus resulting in higher read overhead (due to false hits) if the backup summary is used. The cost of implementing this in our system is low: Figure 10(b) shows the overhead of such a redundancy scheme, where a single coarse summary is sent to a backup for every two summaries sent to the primary proxy. Since a redundant summary is sent for every two summaries, the insert cost is 1.5 times the cost in the normal case. However, these redundant entries result in only a negligible increase in lookup overhead, due to the logarithmic dependence of lookup cost on the index size, while providing full resilience to any single proxy failure.
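The reporting pattern behind Figure 10(b) can be sketched as a simple dispatch rule; the 2:1 ratio matches the experiment, while the coarsening step (merging two summaries into one wider interval) is our own illustrative choice rather than TSAR's exact scheme.

```python
def dispatch_summaries(summaries, send_primary, send_backup):
    """Send every summary to the primary proxy; after every two summaries,
    send one coarse-grain backup summary covering both (1.5x insert cost)."""
    pending = []
    for s in summaries:                       # s = (low, high) value range
        send_primary(s)
        pending.append(s)
        if len(pending) == 2:
            lows, highs = zip(*pending)
            send_backup((min(lows), max(highs)))  # lossier, wider interval
            pending = []

dispatch_summaries([(50, 55), (53, 60), (58, 61)],
                   send_primary=lambda s: print("primary:", s),
                   send_backup=lambda s: print("backup:", s))
```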
6.2 Storage Microbenchmarks

Since sensors are resource-constrained, the energy consumption and the latency at this tier are important measures for evaluating the performance of a storage architecture. Before performing an end-to-end evaluation of our system, we provide more detailed information on the energy consumption of the storage component used to implement the TSAR local archive, based on empirical measurements. In addition we compare these figures to those for other local storage technologies, as well as to the energy consumption of wireless communication, using information from the literature. For empirical measurements we measure energy usage for the storage component itself (i.e., current drawn by the flash chip), as well as for the entire Mica2 mote.

The power measurements in Table 3 were performed for the AT45DB041 [15] flash memory on a Mica2 mote, which is an older NOR flash device. The most promising technology for low-energy storage on sensing devices is NAND flash, such as the Samsung K9K4G08U0M device [16]; published power numbers for this device are provided in the table. Published energy requirements for wireless transmission using the Chipcon [4] CC2420 radio (used in MicaZ and Telos motes) are provided for comparison, assuming zero network and protocol overhead.

Table 3: Storage and Communication Energy Costs (* = measured values)

| Operation | Energy | Energy/byte |
| Mote flash: read 256-byte page | 58µJ* (136µJ* total) | 0.23µJ* |
| Mote flash: write 256-byte page | 926µJ* (1042µJ* total) | 3.6µJ* |
| NAND flash: read 512-byte page | 2.7µJ | 1.8nJ |
| NAND flash: write 512-byte page | 7.8µJ | 15nJ |
| NAND flash: erase 16KB sector | 60µJ | 3.7nJ |
| CC2420 radio: transmit 8 bits (-25dBm) | 0.8µJ | 0.8µJ |
| CC2420 radio: receive 8 bits | 1.9µJ | 1.9µJ |
| Mote AVR processor: in-memory search, 256 bytes | 1.8µJ | 6.9nJ |

Comparing the total energy cost for writing flash (erase + write) to the total cost for communication (transmit + receive), we find that the NAND flash is almost 150 times more efficient than radio communication, even assuming perfect network protocols.
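The 150x figure can be checked directly from the per-byte entries in Table 3; the short computation below is our own arithmetic over those published numbers.

```python
# Per-byte costs from Table 3, in nanojoules.
nand_write = 15.0 + 3.7       # NAND write + amortized erase, nJ/byte
radio = (0.8 + 1.9) * 1000    # transmit + receive: 2.7 uJ/byte -> nJ/byte
print(radio / nand_write)     # -> ~144, i.e. almost 150x more energy for radio
```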
6.3 Prototype Evaluation

This section reports results from an end-to-end evaluation of the TSAR prototype involving both tiers. In our setup, there are four proxies connected via 802.11 links and three sensors per proxy. The multi-hop topology was preconfigured such that sensor nodes were connected in a line to each proxy, forming a minimal tree of depth 3. Due to resource constraints we were unable to perform experiments with dozens of sensor nodes; however, this topology ensured that the network diameter was as large as for a typical network of significantly larger size.

Our evaluation metric is the end-to-end latency of query processing. A query posed on TSAR first incurs the latency of a sparse skip graph lookup, followed by routing to the appropriate sensor node(s). The sensor node reads the required page(s) from its local archive, processes the query on the page that is read, and transmits the response to the proxy, which then forwards it to the user. We first measure query latency for different sensors in our multi-hop topology. Depending on which of the sensors is queried, the total latency increases almost linearly from about 400ms to 1 second as the number of hops increases from 1 to 3 (see Figure 11(a)).

[Figure 11: Query Processing Latency. (a) Multi-hop query performance: latency vs. number of hops; (b) query performance: latency vs. index size, broken down into sensor communication, proxy communication, and sensor lookup and processing.]

Figure 11(b) provides a breakdown of the various components of the end-to-end latency. The dominant component of the total latency is the communication over one or more hops. The typical time to communicate over one hop is approximately 300ms. This large latency is primarily due to the use of a duty-cycled MAC layer; the latency will be larger if the duty cycle is reduced (e.g., the 2% setting as opposed to the 11.5% setting used in this experiment), and will conversely decrease if the duty cycle is increased. The figure also shows the latency for varying index sizes; as expected, the latency of inter-proxy communication and skip graph lookups increases logarithmically with index size. Not surprisingly, the overhead seen at the sensor is independent of the index size.

[Figure 12: Query Latency Components. (a) Data query and fetch time vs. amount of archived data retrieved; (b) sensor query processing delay vs. number of 34-byte records searched.]

The latency also depends on the number of packets transmitted in response to a query: the larger the amount of data retrieved by a query, the greater the latency. This result is shown in Figure 12(a). The step function is due to packetization in TinyOS; TinyOS sends one packet so long as the payload is smaller than 30 bytes and splits the response into multiple packets for larger payloads. As the data retrieved by a query is increased, the latency increases in steps, where each step denotes the overhead of an additional packet.

Finally, Figure 12(b) shows the impact of searching and processing flash memory regions of increasing sizes on a sensor. Each summary represents a collection of records in flash memory, and all of these records need to be retrieved and processed if that summary matches a query. The coarser the summary, the larger the memory region that needs to be accessed. For the search sizes examined, amortization of overhead when searching multiple flash pages and archival records, as well as within the flash chip and its associated driver, results in the appearance of a sub-linear increase in latency with search size. In addition, the operation can be seen to have very low latency, in part due to the simplicity of our query processing, which requires only a compare operation with each stored element. More complex operations, however, will of course incur greater latency.

6.4 Adaptive Summarization

When data is summarized by the sensor before being reported to the proxy, information is lost. With the interval summarization method we are using, this information loss will never cause the proxy to believe that a sensor node does not hold a value which it in fact does, as all archived values will be contained within the interval reported. However, it does cause the proxy to believe that the sensor may hold values which it does not, and to forward query messages to the sensor for these values. These false positives constitute the cost of the summarization mechanism, and need to be balanced against the savings achieved by reducing the number of reports. The goal of adaptive summarization is to dynamically vary the summary size so that these two costs are balanced.

[Figure 13: Impact of Summarization Granularity. (a) Fraction of true hits vs. summary size (number of records); (b) adaptation of summarization granularity over time for query rates of 0.2, 0.1, and 0.03.]

Figure 13(a) demonstrates the impact of summary granularity on false hits. As the number of records included in a summary is increased, the fraction of queries forwarded to the sensor which match data held on that sensor (true positives) decreases.
Next, in Figure 13(b) we run an EmTOS simulation with our adaptive summarization algorithm enabled. The adaptive algorithm increases the summary granularity (defined as the number of records per summary) when Cost(updates)/Cost(false hits) > 1 + ε and reduces it when Cost(updates)/Cost(false hits) < 1 − ε, where ε is a small constant. To demonstrate the adaptive nature of our technique, we plot a time series of the summarization granularity. We begin with a query rate of 1 query per 5 samples, decrease it to 1 every 30 samples, and then increase it again to 1 query every 10 samples. As shown in Figure 13(b), the adaptive technique adjusts accordingly by sending more fine-grain summaries at higher query rates (in response to the higher false hit rate), and fewer, coarse-grain summaries at lower query rates.

7. Related Work

In this section, we review prior work on storage and indexing techniques for sensor networks. While our work addresses both problems jointly, much prior work has considered them in isolation.

The problem of archival storage of sensor data has received limited attention in the sensor network literature. ELF [7] is a log-structured file system for local storage on flash memory that provides load leveling, and Matchbox is a simple file system that is packaged with the TinyOS distribution [14]. Both these systems focus on local storage, whereas our focus is both on storage at the remote sensors as well as on providing a unified view of distributed data across all such local archives. Multi-resolution storage [9] is intended for in-network storage and search in systems where there is significant data in comparison to storage resources. In contrast, TSAR addresses the problem of archival storage in two-tier systems where sufficient resources can be placed at the edge sensors. The RISE platform [21] being developed as part of the NODE project at UCR addresses the issues of hardware platform support for large amounts of storage in remote sensor nodes, but not the indexing and querying of this data.

In order to efficiently access a distributed sensor store, an index needs to be constructed over the data. Early work on sensor networks such as Directed Diffusion [17] assumes a system where all useful sensor data is stored locally at each sensor, and spatially scoped queries are routed using geographic coordinates to locations where the data is stored. Sources publish the events that they detect, and sinks with interest in specific events can subscribe to these events. The Directed Diffusion substrate routes queries to specific locations if the query has geographic information embedded in it (e.g., find the temperature in the south-west quadrant), and if not, the query is flooded throughout the network.

These schemes have the drawback that for queries that are not geographically scoped, the search cost (O(n) for a network of n nodes) may be prohibitive in large networks with frequent queries. Local storage with in-network indexing approaches address this issue by constructing indexes using frameworks such as Geographic Hash Tables [24] and Quad Trees [9]. Recent research has seen a growing body of work on data indexing schemes for sensor networks [26][11][18]. One such scheme is DCS [26], which provides a hash function for mapping from event name to location. DCS constructs a distributed structure that groups events together spatially by their named type.
Distributed Index of Features in Sensornets (DIFS [11]) and Multi-dimensional Range Queries in Sensor Networks (DIM [18]) extend the data-centric storage approach to provide spatially distributed hierarchies of indexes to data.

While these approaches advocate in-network indexing for sensor networks, we believe that indexing is a task that is far too complicated to be performed at the remote sensor nodes, since it involves maintaining significant state and large tables. TSAR provides a better match between the resource requirements of storage and indexing and the availability of resources at different tiers. Thus complex operations such as indexing and managing metadata are performed at the proxies, while storage at the sensor remains simple.

In addition to storage and indexing techniques specific to sensor networks, many distributed, peer-to-peer and spatio-temporal index structures are relevant to our work. DHTs [25] can be used for indexing events based on their type, quad-tree variants such as R-trees [12] can be used for optimizing spatial searches, and K-D trees [2] can be used for multi-attribute search. While this paper focuses on building an ordered index structure for range queries, we will explore the use of other index structures for alternate queries over sensor data.

8. Conclusions

In this paper, we argued that existing sensor storage systems are designed primarily for flat hierarchies of homogeneous sensor nodes and do not fully exploit the multi-tier nature of emerging sensor networks. We presented the design of TSAR, a fundamentally different storage architecture that envisions separation of data from metadata by employing local storage at the sensors and distributed indexing at the proxies. At the proxy tier, TSAR employs a novel multi-resolution ordered distributed index structure, the Sparse Interval Skip Graph, for efficiently supporting spatio-temporal and range queries. At the sensor tier, TSAR supports energy-aware adaptive summarization that can trade off the energy cost of transmitting metadata to the proxies against the overhead of false hits resulting from querying a coarser-resolution index structure. We implemented TSAR in a two-tier sensor testbed comprising Stargate-based proxies and Mote-based sensors. Our experimental evaluation of TSAR demonstrated the benefits and feasibility of employing our energy-efficient, low-latency distributed storage architecture in multi-tier sensor networks.

9. REFERENCES

[1] James Aspnes and Gauri Shah. Skip graphs. In Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 384-393, Baltimore, MD, USA, 12-14 January 2003.
[2] Jon Louis Bentley. Multidimensional binary search trees used for associative searching. Commun. ACM, 18(9):509-517, 1975.
[3] Philippe Bonnet, J. E. Gehrke, and Praveen Seshadri. Towards sensor database systems. In Proceedings of the Second International Conference on Mobile Data Management, January 2001.
[4] Chipcon. CC2420 2.4 GHz IEEE 802.15.4 / ZigBee-ready RF transceiver, 2004.
[5] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. The MIT Press and McGraw-Hill, second edition, 2001.
[6] Adina Crainiceanu, Prakash Linga, Johannes Gehrke, and Jayavel Shanmugasundaram. Querying Peer-to-Peer Networks Using P-Trees. Technical Report TR2004-1926, Cornell University, 2004.
[7] Hui Dai, Michael Neufeld, and Richard Han.
ELF: an efficient log-structured flash file system for micro sensor nodes. In SenSys '04: Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, pages 176-187, New York, NY, USA, 2004. ACM Press.
[8] Peter Desnoyers, Deepak Ganesan, Huan Li, and Prashant Shenoy. PRESTO: A predictive storage architecture for sensor networks. In Tenth Workshop on Hot Topics in Operating Systems (HotOS X), June 2005.
[9] Deepak Ganesan, Ben Greenstein, Denis Perelyubskiy, Deborah Estrin, and John Heidemann. An evaluation of multi-resolution storage in sensor networks. In Proceedings of the First ACM Conference on Embedded Networked Sensor Systems (SenSys), 2003.
[10] L. Girod, T. Stathopoulos, N. Ramanathan, J. Elson, D. Estrin, E. Osterweil, and T. Schoellhammer. A system for simulation, emulation, and deployment of heterogeneous sensor networks. In Proceedings of the Second ACM Conference on Embedded Networked Sensor Systems, Baltimore, MD, 2004.
[11] B. Greenstein, D. Estrin, R. Govindan, S. Ratnasamy, and S. Shenker. DIFS: A distributed index for features in sensor networks. Elsevier Journal of Ad Hoc Networks, 2003.
[12] Antonin Guttman. R-trees: a dynamic index structure for spatial searching. In SIGMOD '84: Proceedings of the 1984 ACM SIGMOD International Conference on Management of Data, pages 47-57, New York, NY, USA, 1984. ACM Press.
[13] Nicholas Harvey, Michael B. Jones, Stefan Saroiu, Marvin Theimer, and Alec Wolman. SkipNet: A scalable overlay network with practical locality properties. In Proceedings of the 4th USENIX Symposium on Internet Technologies and Systems (USITS '03), Seattle, WA, March 2003.
[14] Jason Hill, Robert Szewczyk, Alec Woo, Seth Hollar, David Culler, and Kristofer Pister. System architecture directions for networked sensors. In Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS-IX), pages 93-104, Cambridge, MA, USA, November 2000. ACM.
[15] Atmel Inc. 4-megabit 2.5-volt or 2.7-volt DataFlash AT45DB041B, 2005.
[16] Samsung Semiconductor Inc. K9W8G08U1M, K9K4G08U0M: 512M x 8 bit / 1G x 8 bit NAND flash memory, 2003.
[17] Chalermek Intanagonwiwat, Ramesh Govindan, and Deborah Estrin. Directed diffusion: A scalable and robust communication paradigm for sensor networks. In Proceedings of the Sixth Annual International Conference on Mobile Computing and Networking, pages 56-67, Boston, MA, August 2000. ACM Press.
[18] Xin Li, Young-Jin Kim, Ramesh Govindan, and Wei Hong. Multi-dimensional range queries in sensor networks. In Proceedings of the First ACM Conference on Embedded Networked Sensor Systems (SenSys), 2003.
[19] Witold Litwin, Marie-Anne Neimat, and Donovan A. Schneider. RP*: A family of order preserving scalable distributed data structures. In VLDB '94: Proceedings of the 20th International Conference on Very Large Data Bases, pages 342-353, San Francisco, CA, USA, 1994.
[20] Samuel Madden, Michael Franklin, Joseph Hellerstein, and Wei Hong. TAG: a tiny aggregation service for ad-hoc sensor networks. In OSDI, Boston, MA, 2002.
[21] A. Mitra, A. Banerjee, W. Najjar, D. Zeinalipour-Yazti, D. Gunopulos, and V. Kalogeraki. High performance, low power sensor platforms featuring gigabyte scale storage. In SenMetrics 2005: Third International Workshop on Measurement, Modeling, and Performance Analysis of Wireless Sensor Networks, July 2005.
[22] J. Polastre, J. Hill, and D. Culler.
Versatile low power media access for wireless\nsensor networks. In Proceedings of the Second ACM Conference on Embedded\nNetworked Sensor Systems (SenSys), November 2004.\n[23] William Pugh. Skip lists: a probabilistic alternative to balanced trees. Commun.\nACM, 33(6):668-676, 1990.\n[24] S. Ratnasamy, D. Estrin, R. Govindan, B. Karp, L. Yin S. Shenker, and F. Yu.\nData-centric storage in sensornets. In ACM First Workshop on Hot Topics in\nNetworks, 2001.\n[25] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A scalable\ncontent addressable network. In Proceedings of the 2001 ACM SIGCOMM\nConference, 2001.\n[26] S. Ratnasamy, B. Karp, L. Yin, F. Yu, D. Estrin, R. Govindan, and S. Shenker.\nGHT - a geographic hash-table for data-centric storage. In First ACM\nInternational Workshop on Wireless Sensor Networks and their Applications,\n2002.\n[27] N. Xu, E. Osterweil, M. Hamilton, and D. Estrin.\nhttp://www.lecs.cs.ucla.edu/\u02dcnxu/ess/. James Reserve Data.\n50", "keywords": "analysis;archival storage;datum separation;sensor datum;index method;homogeneous architecture;flash storage;metada;distributed index structure;interval skip graph;interval tree;spatial scoping;geographic hash table;separation of datum;flooding;multi-tier sensor network;wireless sensor network;archive"} {"name": "train_C-48", "title": "Multi-dimensional Range Queries in Sensor Networks\u2217", "abstract": "In many sensor networks, data or events are named by attributes. Many of these attributes have scalar values, so one natural way to query events of interest is to use a multidimensional range query. An example is: List all events whose temperature lies between 50\u25e6 and 60\u25e6 , and whose light levels lie between 10 and 15. Such queries are useful for correlating events occurring within the network. In this paper, we describe the design of a distributed index that scalably supports multi-dimensional range queries. Our distributed index for multi-dimensional data (or DIM) uses a novel geographic embedding of a classical index data structure, and is built upon the GPSR geographic routing algorithm. Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O( \u221a N)). In detailed simulations, we show that in practice, the insertion and query costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized network. Finally, experiments on a small scale testbed validate the feasibility of DIMs.", "fulltext": "1. INTRODUCTION\nIn wireless sensor networks, data or events will be named\nby attributes [15] or represented as virtual relations in a\ndistributed database [18, 3]. Many of these attributes will\nhave scalar values: e.g., temperature and light levels, soil\nmoisture conditions, etc. In these systems, we argue, one\nnatural way to query for events of interest will be to use\nmulti-dimensional range queries on these attributes. For\nexample, scientists analyzing the growth of marine\nmicroorganisms might be interested in events that occurred within\ncertain temperature and light conditions: List all events\nthat have temperatures between 50\u25e6\nF and 60\u25e6\nF, and light\nlevels between 10 and 20.\nSuch range queries can be used in two distinct ways. They\ncan help users efficiently drill-down their search for events of\ninterest. 
The query described above illustrates this, where the scientist is presumably interested in discovering, and perhaps mapping, the combined effect of temperature and light on the growth of marine micro-organisms. More importantly, such queries can be used by application software running within a sensor network for correlating events and triggering actions. For example, if in a habitat monitoring application a bird alighting on its nest is indicated by a certain range of thermopile sensor readings and a certain range of microphone readings, a multi-dimensional range query on those attributes enables higher-confidence detection of the arrival of a flock of birds, and can trigger a system of cameras.

In traditional database systems, such range queries are supported using pre-computed indices. Indices trade off some initial pre-computation cost to achieve a significantly more efficient querying capability. For sensor networks, we assert that a centralized index for multi-dimensional range queries may not be feasible for energy-efficiency reasons (as well as the fact that the access bandwidth to this central index will be limited, particularly for queries emanating from within the network). Rather, we believe, there will be situations when it is more appropriate to build an in-network distributed data structure for efficiently answering multi-dimensional range queries.

In this paper, we present just such a data structure, which we call a DIM (a Distributed Index for Multi-dimensional data). DIMs are inspired by classical database indices, and are essentially embeddings of such indices within the sensor network. DIMs leverage two key ideas: in-network data-centric storage, and a novel locality-preserving geographic hash (Section 3). DIMs trace their lineage to data-centric storage systems [23]. The underlying mechanism in these systems allows nodes to consistently hash an event to some location within the network, which allows efficient retrieval of events. Building upon this, DIMs use a technique whereby events whose attribute values are close are likely to be stored at the same or nearby nodes. DIMs then use an underlying geographic routing algorithm (GPSR [16]) to route events and queries to their corresponding nodes in an entirely distributed fashion.

We discuss the design of a DIM, presenting algorithms for event insertion and querying, for maintaining a DIM in the event of node failure, and for making DIMs robust to data or packet loss (Section 3). We then extensively evaluate DIMs using analysis (Section 4), simulation (Section 5), and an actual implementation (Section 6). Our analysis reveals that, under reasonable assumptions about query distributions, DIMs scale quite well with network size (both insertion and query costs scale as O(√N)). In detailed simulations, we show that in practice, the event insertion and querying costs of other alternatives are sometimes an order of magnitude more than the costs of DIMs, even for moderately sized networks. Experiments on a small-scale testbed validate the feasibility of DIMs (Section 6). Much work remains, including efficient support for skewed data distributions, existential queries, and node heterogeneity.

We believe that DIMs will be an essential, but perhaps not necessarily the only, distributed data structure supporting efficient queries in sensor networks.
DIMs will be part of a suite of such systems that enable feature extraction [7], simple range querying [10], exact-match queries [23], or continuous queries [15, 18]. All such systems will likely be integrated into a sensor network database system such as TinyDB [17]. Application designers could then choose the appropriate method of information access. For instance, a fire tracking application would use DIM to detect the hotspots, and would then use mechanisms that enable continuous queries [15, 18] to track the spatio-temporal progress of the hotspots. Finally, we note that DIMs are applicable not just to sensor networks, but to other deeply distributed systems (embedded networks for home and factory automation) as well.

2. RELATED WORK

The basic problem that this paper addresses, multi-dimensional range queries, is typically solved in database systems using indexing techniques. The database community has focused mostly on centralized indices, but distributed indexing has received some attention in the literature.

Indexing techniques essentially trade off some data insertion cost to enable efficient querying. Indexing has long been a classical research problem in the database community [5, 2]. Our work draws its inspiration from the class of multi-key constant-branching index structures, exemplified by k-d trees [2], where k represents the dimensionality of the data space. Our approach essentially represents a geographic embedding of such structures in a sensor field. There is one important difference. The classical indexing structures are data-dependent (as are some indexing schemes that use locality-preserving hashes developed in the theory literature [14, 8, 13]): the index structure is decided not only by the data, but also by the order in which data is inserted. Our current design is not data-dependent. Finally, tangentially related to our work is the class of spatial indexing systems [21, 6, 11].

While there has been some work on distributed indexing, the problem has not been extensively explored. There exist distributed indices of a restricted kind, those that allow exact-match or partial prefix-match queries. Examples of such systems, of course, are the Internet Domain Name System, and the class of distributed hash table (DHT) systems exemplified by Freenet [4], Chord [24], and CAN [19]. Our work is superficially similar to CAN in that both construct a zone-based overlay atop the underlying physical network. The underlying details make the two systems very different: CAN's overlay is purely logical, while our overlay is consistent with the underlying physical topology. More recent work in the Internet context has addressed support for range queries in DHT systems [1, 12], but it is unclear if these results directly translate to the sensor network context.

Several research efforts have expressed the vision of a database interface to sensor networks [9, 3, 18], and there are examples of systems that contribute to this vision [18, 3, 17]. Our work is similar in spirit to this body of literature. In fact, DIMs could become an important component of a sensor network database system such as TinyDB [17]. Our work departs from prior work in this area in two significant respects. Unlike these approaches, in our work the data generated at a node are hashed (in general) to different locations. This hashing is the key to scaling multi-dimensional range searches.
In all the other systems described above, queries are flooded throughout the network, and can dominate the total cost of the system. Our work avoids query flooding by an appropriate choice of hashing. Madden et al. [17] also describe a distributed index, called Semantic Routing Trees (SRT). This index is used to direct queries to nodes that have detected relevant data. Our work differs from SRT in three key aspects. First, SRT is built on single attributes, while DIM supports multiple attributes. Second, SRT constructs a routing tree based on historical sensor readings, and therefore works well only for slowly-changing sensor values. Finally, in SRT queries are issued from a fixed node, while in DIM queries can be issued from any node.

A similar differentiation applies with respect to work on data-centric routing in sensor networks [15, 25], where data generated at a node is assumed to be stored at the node, and queries are either flooded throughout the network [15], or each source sets up a network-wide overlay announcing its presence so that mobile sinks can rendezvous with sources at the nearest node on the overlay [25]. These approaches work well for relatively long-lived queries.

Finally, our work is most closely related to data-centric storage [23] systems, which include geographic hash tables (GHTs) [20], DIMENSIONS [7], and DIFS [10]. In a GHT, data is hashed by name to a location within the network, enabling highly efficient rendezvous. GHTs are built upon the GPSR [16] protocol and leverage some interesting properties of that protocol, such as the ability to route to the node nearest to a given location. We also leverage properties of GPSR (as we describe later), but we use a locality-preserving hash to store data, enabling efficient multi-dimensional range queries. DIMENSIONS and DIFS can be thought of as using the same set of primitives as GHT (storage using consistent hashing), but for different ends: DIMENSIONS allows drill-down search for features within a sensor network, while DIFS allows range queries on a single key in addition to other operations.

3. THE DESIGN OF DIMS

Most sensor networks are deployed to collect data from the environment. In these networks, nodes (either individually or collaboratively) will generate events. An event can generally be described as a tuple of attribute values, ⟨A1, A2, ..., Ak⟩, where each attribute Ai represents a sensor reading, or some value corresponding to a detection (e.g., a confidence level). The focus of this paper is the design of systems to efficiently answer multi-dimensional range queries of the form ⟨x1 − y1, x2 − y2, ..., xk − yk⟩. Such a query returns all events whose attribute values fall into the corresponding ranges. Notice that point queries, i.e., queries that ask for events with specified values for each attribute, are a special case of range queries.

As we have discussed in Section 1, range queries can enable efficient correlation and triggering within the network. It is possible to implement range queries by flooding a query within the network. However, as we show in later sections, this alternative can be inefficient, particularly as the system scales, and if nodes within the network issue such queries relatively frequently. The other alternative, sending all events to an external storage node, results in the access link becoming a bottleneck, especially if nodes within the network issue queries. Shenker et al.
[23] also make similar arguments with respect to data-centric storage schemes in general; DIMs are an instance of such schemes.

The system we present in this paper, the DIM, relies upon two foundations: a locality-preserving geographic hash, and an underlying geographic routing scheme.

The key to resolving range queries efficiently is data locality: i.e., events with comparable attribute values are stored nearby. The basic insight underlying DIM is that data locality can be obtained by a locality-preserving geographic hash function. Our geographic hash function finds a locality-preserving mapping from the multi-dimensional space (described by the set of attributes) to a 2-d geographic space; this mapping is inspired by k-d trees [2] and is described later. Moreover, each node in the network self-organizes to claim part of the attribute space for itself (we say that each node owns a zone), so events falling into that space are routed to and stored at that node.

Having established the mapping and the zone structure, DIMs use a geographic routing algorithm previously developed in the literature to route events to their corresponding nodes, or to resolve queries. This algorithm, GPSR [16], essentially enables the delivery of a packet to a node at a specified location. The routing mechanism is simple: when a node receives a packet destined to a node at location X, it forwards the packet to the neighbor closest to X. In GPSR, this is called greedy-mode forwarding. When no such neighbor exists (as when there exists a void in the network), the node starts the packet on a perimeter-mode traversal, using the well-known right-hand rule to circumnavigate voids. GPSR includes efficient techniques for perimeter traversal that are based on graph planarization algorithms amenable to distributed implementation.

For all of this to work, DIMs make two assumptions that are consistent with the literature [23]. First, all nodes know the approximate geographic boundaries of the network. These boundaries may either be configured in nodes at the time of deployment, or may be discovered using a simple protocol. Second, each node knows its geographic location. Node locations can be automatically determined by a localization system or by other means.

Although the basic idea of DIMs may seem straightforward, it is challenging to design a completely distributed data structure that must be robust to packet losses and node failures, yet must support efficient query distribution and deal with communication voids and obstacles. We now describe the complete design of DIMs.

3.1 Zones

The key idea behind DIMs, as we have discussed, is a geographic locality-preserving hash that maps a multi-attribute event to a geographic zone. Intuitively, a zone is a subdivision of the geographic extent of a sensor field.

A zone is defined by the following constructive procedure. Consider a rectangle R on the x-y plane. Intuitively, R is the bounding rectangle that contains all sensors within the network. We call a sub-rectangle Z of R a zone if Z is obtained by dividing R k times, k ≥ 0, using a procedure that satisfies the following property:

After the i-th division, 0 ≤ i ≤ k, R is partitioned into 2^i equal-sized rectangles.
If i is an odd (even) number, the i-th division is parallel to the y-axis (x-axis).

That is, the bounding rectangle R is first sub-divided into two zones at level 0 by a vertical line that splits R into two equal pieces; each of these sub-zones can be split into two zones at level 1 by a horizontal line, and so on. We call the non-negative integer k the level of zone Z, i.e., level(Z) = k.

A zone can be identified either by a zone code code(Z) or by an address addr(Z). The code code(Z) is a 0-1 bit string of length level(Z), and is defined as follows. If Z lies in the left half of R, the first (from the left) bit of code(Z) is 0, else 1. If Z lies in the bottom half of R, the second bit of code(Z) is 0, else 1. The remaining bits of code(Z) are then recursively defined on each of the four quadrants of R. This definition of the zone code matches the definition of zones given above, encoding divisions of the sensor field geography by bit strings. Thus, in Figure 2, the zone in the top-right corner of the rectangle R has a zone code of 1111. Note that the zone codes collectively define a zone tree such that individual zones are at the leaves of this tree.

The address of a zone Z, addr(Z), is defined to be the centroid of the rectangle defined by Z. The two representations of a zone (its code and its address) can each be computed from the other, assuming the level of the zone is known.

Two zones are called sibling zones if their zone codes are the same except for the last bit. For example, if code(Z1) = 01101 and code(Z2) = 01100, then Z1 and Z2 are sibling zones. The sibling subtree of a zone is the subtree rooted at the left or right sibling of the zone in the zone tree. We uniquely define a backup zone for each zone as follows: if the sibling subtree of the zone is on the left, the backup zone is the right-most zone in the sibling subtree; otherwise, the backup zone is the left-most zone in the sibling subtree. For a zone Z, let p be the first level(Z) − 1 digits of code(Z), and let backup(Z) be the backup zone of zone Z. If code(Z) = p1, then code(backup(Z)) = p01* with the largest possible number of trailing 1's (* denotes zero or more occurrences). If code(Z) = p0, then code(backup(Z)) = p10* with the largest possible number of trailing 0's.
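To make the zone code and address concrete, here is a small sketch (ours, not the paper's implementation; the names zone_code and addr are illustrative) for a bounding rectangle normalized to the unit square. Odd-numbered divisions are vertical and even-numbered ones horizontal, exactly as in the constructive procedure above:

```python
def zone_code(px, py, k):
    """Level-k zone code (a 0/1 string) of the zone containing point (px, py)."""
    x0, x1, y0, y1 = 0.0, 1.0, 0.0, 1.0       # bounding rectangle R
    bits = []
    for i in range(k):
        if i % 2 == 0:                        # odd-numbered division: vertical line
            mid = (x0 + x1) / 2.0
            if px < mid: bits.append('0'); x1 = mid   # left half of R -> 0
            else:        bits.append('1'); x0 = mid   # right half of R -> 1
        else:                                 # even-numbered division: horizontal line
            mid = (y0 + y1) / 2.0
            if py < mid: bits.append('0'); y1 = mid   # bottom half -> 0
            else:        bits.append('1'); y0 = mid   # top half -> 1
    return ''.join(bits)

def addr(code):
    """addr(Z): centroid of the rectangle identified by a zone code."""
    x0, x1, y0, y1 = 0.0, 1.0, 0.0, 1.0
    for i, b in enumerate(code):
        if i % 2 == 0:
            mid = (x0 + x1) / 2.0
            x0, x1 = (x0, mid) if b == '0' else (mid, x1)
        else:
            mid = (y0 + y1) / 2.0
            y0, y1 = (y0, mid) if b == '0' else (mid, y1)
    return (x0 + x1) / 2.0, (y0 + y1) / 2.0

assert zone_code(0.9, 0.9, 4) == '1111'       # the top-right zone of Figure 2
```

As the text notes, given the level of a zone either representation can be recovered from the other: addr simply replays the divisions encoded by the bit string.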
3.2 Associating Zones with Nodes

Our definition of a zone is independent of the actual distribution of nodes in the sensor field, and only depends upon the geographic extent (the bounding rectangle) of the sensor field. We now describe how zones are mapped to nodes.

Conceptually, the sensor field is logically divided into zones and each zone is assigned to a single node. If the sensor network were deployed in a grid-like (i.e., very regular) fashion, then it is easy to see that there exists a k such that each node maps into a distinct level-k zone. In general, however, the node placements within a sensor field are likely to be less regular than the grid. For some k, some zones may be empty and other zones might have more than one node situated within them. One alternative would have been to choose a fixed k for the overall system, and then associate nodes with the zones they are in (and, if a zone is empty, associate the nearest node with it, for some definition of nearest). Because it makes our overall query routing system simpler, we allow nodes in a DIM to map to different-sized zones.

To precisely understand the associations between zones and nodes, we define the notion of zone ownership. For any given placement of network nodes, consider a node A. Let ZA be the largest zone that includes only node A and no other node. Then, we say that A owns ZA. Notice that this definition of ownership may leave some sections of the sensor field un-associated with a node. For example, in Figure 2, the zone 110 does not contain any nodes and would not have an owner. To remedy this, for any empty zone Z, we define the owner to be the owner of backup(Z). In our example, that empty zone's owner would also be the node that owns 1110, its backup zone.

Having defined the association between nodes and zones, the next problem we tackle is: given a node placement, does there exist a distributed algorithm that enables each node to determine which zones it owns, knowing only the overall boundary of the sensor network? In principle, this should be relatively straightforward, since each node can simply determine the locations of its neighbors and apply simple geometric methods to determine the largest zone around it such that no other node resides in that zone. In practice, however, communication voids and obstacles make the algorithm much more challenging. In particular, resolving the ownership of zones that do not contain any nodes is complicated. Equally complicated is the case where the zone of a node is larger than its communication radius and the node cannot determine the boundaries of its zone by local communication alone.

Our distributed zone building algorithm defers the resolution of such zones until either a query is initiated or an event is inserted. The basic idea behind our algorithm is that each node tentatively builds up an idea of the zone it resides in just by communicating with its neighbors (remembering which boundaries of the zone are undecided because there is no radio neighbor that can help resolve that boundary). These undecided boundaries are later resolved by a GPSR perimeter traversal when data messages are actually routed.

We now describe the algorithm, and illustrate it using examples. In our algorithm, each node uses an array bound[0..3] to maintain the four boundaries of the zone it owns (remember that in this algorithm, the node only tries to determine the zone it resides in, not the other zones it might own because those zones are devoid of nodes).

Figure 1: A network, where circles represent sensor nodes and dashed lines mark the network boundary.

Figure 2: The zone code and boundaries (zones 00, 010, 011, 100, 101, 110, 1110, and 1111).

Figure 3: The corresponding zone tree.

When a node starts up, it initializes this array to be the network boundary, i.e., initially each node assumes its zone contains the whole network. The zone boundary algorithm then relies upon GPSR's beacon messages to learn the locations of neighbors within radio range. Upon hearing of such a neighbor, the node calls the algorithm in Figure 4 to update its zone boundaries and its code accordingly. In this algorithm, we assume that A is the node at which the algorithm is executed, ZA is its zone, and a is a newly discovered neighbor of A. (Procedure Contain(ZA, a) is used to decide if node a is located within the current zone boundaries of node A.)

Build-Zone(a)
 1  while Contain(ZA, a)
 2    do if length(code(ZA)) mod 2 = 0
 3         then new_bound ← (bound[0] + bound[1])/2
 4              if A.x < new_bound
 5                then bound[1] ← new_bound
 6                else bound[0] ← new_bound
 7         else new_bound ← (bound[2] + bound[3])/2
 8              if A.y < new_bound
 9                then bound[3] ← new_bound
10                else bound[2] ← new_bound
11       Update zone code code(ZA)
Figure 4: Zone boundary determination, where A.x and A.y represent the geographic coordinates of node A.

Using this algorithm, then, each node can independently and asynchronously decide its own tentative zone based on the locations of its neighbors.
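A runnable rendering of Build-Zone might look as follows (a sketch under our own naming assumptions: a node record with location x, y, a boundary array bound = [west, east, south, north], and a zone-code string). Per Section 3.3.3 below, the same halving logic can arguably also serve the shrink operation, re-run with a remote node's recorded location in place of a radio neighbor:

```python
class Node:
    def __init__(self, x, y, net_bound):
        self.x, self.y = x, y
        self.bound = list(net_bound)   # [west, east, south, north]; starts as the network boundary
        self.code = ''                 # code(Z_A), grown one bit per division

def contain(bound, px, py):
    west, east, south, north = bound
    return west <= px <= east and south <= py <= north

def build_zone(A, ax, ay):
    """Halve A's tentative zone (keeping A's half) until the neighbor at
    (ax, ay) falls outside it; cf. Build-Zone in Figure 4."""
    while contain(A.bound, ax, ay):
        if len(A.code) % 2 == 0:                    # next division is vertical
            mid = (A.bound[0] + A.bound[1]) / 2.0
            if A.x < mid:
                A.bound[1] = mid; A.code += '0'     # A keeps the left half
            else:
                A.bound[0] = mid; A.code += '1'     # A keeps the right half
        else:                                       # next division is horizontal
            mid = (A.bound[2] + A.bound[3]) / 2.0
            if A.y < mid:
                A.bound[3] = mid; A.code += '0'     # A keeps the bottom half
            else:
                A.bound[2] = mid; A.code += '1'     # A keeps the top half

A = Node(0.1, 0.8, [0.0, 1.0, 0.0, 1.0])
build_zone(A, 0.7, 0.2)     # one discovered neighbor shrinks A's zone to code '0'
assert A.code == '0'
```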
Figure 2 illustrates the results of applying this algorithm to the network in Figure 1, and Figure 3 describes the corresponding zone tree. Each zone resides at a leaf node, and the code of a zone is the path from the root to the zone if we represent the branch to the left child by 0 and the branch to the right child by 1. This binary tree forms the index that we will use in the following event and query processing procedures.

We see that the zone sizes differ, depending on the local node densities, and so do the lengths of the zone codes of different nodes. Notice that in Figure 2, there is an empty zone whose code should be 110. In this case, if the node in zone 1111 can only hear the node in zone 1110, it sets its boundary with the empty zone to undecided, because it did not hear from any neighboring node in that direction. As we have mentioned before, the undecided boundaries are resolved using GPSR's perimeter mode when an event is inserted or a query is sent. We describe event insertion next.

Finally, this description does not explain how a node's zone codes are adjusted when neighboring nodes fail or new nodes come up. We return to this in Section 3.5.

Insert-Event(e)
 1  c ← Encode(e)
 2  if Contain(ZA, c) = true and is_Internal() = true
 3    then store e and exit
 4  Send-Message(c, e)

Send-Message(c, m)
 1  if ∃ neighbor Y, Closer(Y, owner(m), m) = true
 2    then addr(m) ← addr(Y)
 3    else if length(c) > length(code(m))
 4           then update code(m) and addr(m)
 5                source(m) ← caller
 6  if is_Owner(m) = true
 7    then owner(m) ← caller's code
 8  Send(m)
Figure 5: Inserting an event in a DIM. Procedure Closer(A, B, m) returns true if code(A) is closer to code(m) than code(B). source(m) is used to set the source address of message m.

3.3 Inserting an Event

In this section, we describe how events are inserted into a DIM. There are two algorithms of interest: a consistent hashing technique for mapping an event to a zone, and a routing algorithm for storing the event at the appropriate zone. As we shall see, these two algorithms are inter-related.

3.3.1 Hashing an Event to a Zone

In Section 3.1, we described a recursive tessellation of the geographic extent of a sensor field. We now describe a consistent hashing scheme for a DIM that supports range queries on m distinct attributes.2

2 DIM does not assume that all nodes are homogeneous in terms of the sensors they have. Thus, in an m-dimensional DIM, a node that does not possess all m sensors can use NULL values for the corresponding readings; DIM treats NULL as an extreme value for range comparisons. As an aside, a network may have many DIM instances running concurrently.

Let us denote these attributes A1 . . . Am. For simplicity, assume for now that the depth of every zone in the network is k, that k is a multiple of m, and that this value of k is known to every node. We will relax this assumption shortly. Furthermore, for ease of discussion, we assume that all attribute values have been normalized to be between 0 and 1.

Our hashing scheme assigns a k-bit zone code to an event as follows. For i between 1 and m, if Ai < 0.5, the i-th bit of the zone code is assigned 0, else 1.
For i between m + 1 and 2m, if Ai−m < 0.25 or Ai−m ∈ [0.5, 0.75), the i-th bit of the zone code is assigned 0, else 1, because the next-level divisions are at 0.25 and 0.75, which divide the ranges into [0, 0.25), [0.25, 0.5), [0.5, 0.75), and [0.75, 1). We repeat this procedure until all k bits have been assigned. As an example, consider the event E = ⟨0.3, 0.8⟩. For this event, the 5-bit zone code is code(E) = 01110.

Essentially, our hashing scheme uses the values of the attributes in round-robin fashion on the zone tree (such as the one in Figure 3) in order to map an m-attribute event to a zone code. This is reminiscent of k-d trees [2], but is quite different from that data structure: zone trees are spatial embeddings and do not incorporate the re-balancing algorithms of k-d trees.

In our design of DIMs, we do not require nodes to have zone codes of the same length, nor do we expect a node to know the zone codes of other nodes. Rather, suppose the encoding node is A and its own zone code is of length kA. Then, given an event E, node A only hashes E to a zone code of length kA. We denote the zone code assigned to an event E by code(E). As we describe below, code(E) is refined by intermediate nodes as the event is routed. This lazy evaluation of zone codes allows different nodes to use different-length zone codes without any explicit coordination.
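The round-robin bit assignment can be written compactly as follows (our sketch; encode_event is an assumed name). A node whose own zone code has length kA simply calls it with k = kA, which is precisely the lazy evaluation just described:

```python
def encode_event(attrs, k):
    """Hash an m-attribute event (values normalized to [0, 1)) to a k-bit
    zone code, cycling through the attributes round-robin (Section 3.3.1)."""
    m = len(attrs)
    lo = [0.0] * m
    hi = [1.0] * m
    bits = []
    for i in range(k):
        j = i % m                          # attribute used for the i-th bit
        mid = (lo[j] + hi[j]) / 2.0        # this level's division for attribute j
        if attrs[j] < mid:
            bits.append('0'); hi[j] = mid
        else:
            bits.append('1'); lo[j] = mid
    return ''.join(bits)

assert encode_event([0.3, 0.8], 5) == '01110'   # the example E from the text
```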
Now, B compares its own code code(B) against the\nowner code owner(E) contained in the incoming message.\nIf code(B) has a longer match with code(E) than the\ncurrent owner owner(E), then B sets itself to be the current\nowner of E, meaning that if nobody is eligible to store E,\nthen B will store the event (we shall see how this happens\nnext). If B\"s zone code does not exactly match code(E), B\nwill invoke GPSR to deliver E to the next hop.\n3.3.3 Resolving undecided zone boundaries during\ninsertion\nSuppose that some node, say C, finds itself to be the\ndestination (or eventual owner) of an event E. It does so by\nnoticing that code code(C) equals code(E) after locally\nrecomputing a code for E. In that case, C stores E locally, but\nonly if all four of C\"s zone boundaries are decided. When\nthis condition holds, C knows for sure that no other nodes\nhave overlapped zones with it. In this case, we call C an\ninternal node.\nRecall, though, that because the zone discovery algorithm\nSection 3.2 only uses information from immediate neighbors,\none or more of C\"s boundaries may be undecided. If so, C\nassumes that some other nodes have a zone that overlaps\nwith its own, and sets out to resolve this overlap. To do\nthis, C now sets itself to be the owner of E and continues\nforwarding the message. Here we rely on GPSR\"s\nperimeter mode routing to probe around the void that causes the\nundecided boundary. Since the message starts from C and\nis destined for a geographic location near C, GPSR\nguarantees that the message will be delivered back to C if no\nother nodes will update the information in the message. If\nthe message comes back to C with itself to be the owner, C\ninfers that it must be the true owner of the zone and stores\nE locally.\nIf this does not happen, there are two possibilities. The\nfirst is that as the event traverses the perimeter, some\nintermediate node, say B whose zone overlaps with C\"s marks\nitself to be the owner of the event, but otherwise does not\nchange the event\"s zone code. This node also recognizes that\nits own zone overlaps with C\"s and initiates a message\nexchange which causes each of them to appropriately shrink\ntheir zone.\nFigures 6 through 8 show an example of this data-driven\nzone shrinking. Initially, both node A and node B have\nclaimed the same zone 0 because they are out of radio range\nof each other. Suppose that A inserts an event E = 0.4, 0.8, 0.9 .\nA encodes E to 0 and claims itself to be the owner of E.\nSince A is not an internal node, it sends out E, looking for\nother owner candidates of E. Once E gets to node B, B will\nsee in the message\"s owner field A\"s code that is the same as\nits own. B then shrinks its zone from 0 to 01 according to\nA\"s location which is also recorded in the message and send\na shrink request to A. Upon receiving this request, A also\nshrinks its zone from 0 to 00.\nA second possibility is if some intermediate node changes\nthe destination code of E to a more specific value (i.e.,\nlonger zone code). Let us label this node D. D now tries\nto initiate delivery to the centroid of the new zone. This\nA\nB\n0\n0\n110\n100\n1111\n1110\n101\nFigure 6: Nodes A and B have claimed the same zone.\nA\nB\n<0.4,0.8,0.9>\nFigure 7: An event/query message (filled arrows)\ntriggers zone shrinking (hollow arrows).\nA\nB\n01\n00\n110\n100\n1111\n1110\n101\nFigure 8: The zone layout after shrinking. 
This might result in a new perimeter walk that returns to D (if, for example, D happens to be geographically closest to the centroid of the zone). However, D would not be the owner of the event; that would still be C. In routing to the centroid of this zone, the message may traverse the perimeter and return to D. Now D notices that C was the original owner, so it encapsulates the event and directs it to C. In case there indeed is another node, say X, that owns a zone overlapping with C's, X will notice this fact by finding in the message the same prefix as the code of one of its zones, but with a geographic location different from its own. X will shrink its zone to resolve the overlap. If X's zone is smaller than or equal to C's zone, X will also send a shrink request to C. Once C receives a shrink request, it will reduce its zone appropriately and fix its undecided boundary. In this manner, the zone formation process is resolved on demand, in a data-driven way.

There are several interesting effects with respect to perimeter walking that arise in our algorithm. The first is that there are some cases where an event insertion might cause the entire outer perimeter of the network to be traversed.3 Figure 6 also works as an example where the outer perimeter is traversed. Event E inserted by A will eventually be stored at node B. Before node B stores event E, if B's nominal radio range does not intersect the network boundary, it needs to send out E again, as A did, because B in this case is not an internal node. But if B's nominal radio range intersects the network boundary, it then has two choices. It can assume that there will not be any nodes outside the network boundary, and that B is therefore an internal node; this is an aggressive approach. On the other hand, B can make the conservative decision that there might be other nodes it has not yet heard of; B will then force the message to walk another perimeter before storing it.

3 This happens less frequently than for GHTs, where inserting an event to a location outside the actual (but inside the nominal) boundary of the network will always invoke an external perimeter walk.

In some situations, especially for large zones where the node that owns a zone is far away from the centroid of the owned zone, there might exist a small perimeter around the destination that does not include the owner of the zone. The event would then end up being stored at a different node than the real owner. In order to deal with this problem, we add an extra operation to event forwarding, called efficient neighbor discovery. Before invoking GPSR, a node needs to check whether there exists a neighbor who is eligible to be the real owner of the event. To do this, a node C, say, needs to know the zone codes of its neighboring nodes. We use GPSR's beaconing messages to piggyback the zone codes of nodes. By simply comparing the event's code and a neighbor's code, a node can decide whether there exists a neighbor Y that is more likely to be the owner of event E. C then delivers E to Y, which simply follows the decision-making procedure discussed above.

3.3.4 Summary and Pseudo-code

In summary, our event insertion procedure is designed to interact cleanly with the zone discovery mechanism and the event hashing mechanism. The latter two mechanisms are kept simple, while the event insertion mechanism uses lazy evaluation at each hop to refine the event's zone code, and leverages GPSR's perimeter walking mechanism to fix undecided zone boundaries.
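The per-hop owner refinement of Section 3.3.2 boils down to a longest-common-prefix comparison between zone codes. A minimal sketch, with hypothetical names:

```python
def match_len(a, b):
    """Length of the common prefix of two zone-code strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def is_owner(my_code, owner_code, event_code):
    """True if this node is more eligible to own the event than the owner
    currently recorded in the message (cf. is_Owner in Figure 5)."""
    return match_len(my_code, event_code) > match_len(owner_code, event_code)
```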
In Section 3.5, we address the robustness of event insertion to packet loss and to node failures.

Figure 5 shows the pseudo-code for inserting and forwarding an event e. In this pseudo-code, we have omitted a description of the zone shrinking procedure. In the pseudo-code, procedure is_Internal() is used to determine whether the caller is an internal node, and procedure is_Owner() is used to determine whether the caller is more eligible to be the owner of the event than the currently claimed owner recorded in the message. Procedure Send-Message is used to send either an event message or a query message. If the message destination address has been changed, the packet source address also needs to be changed in order to avoid the packet being dropped by GPSR, since GPSR does not allow a node to see the same packet in greedy mode twice.

3.4 Resolving and Routing Queries

DIMs support both point queries4 and range queries. Routing a point query is identical to routing an event; thus, the rest of this section details how range queries are routed.

4 By point queries, we mean queries with an equality condition on all indexed keys. DIM index attributes are not necessarily primary keys.

The key challenge in routing range queries is brought out by the following strawman design. If the entire network were divided evenly into zones of depth k (for some pre-defined constant k), then the querier (the node issuing the query) could subdivide a given range query into the relevant subzones and route individual requests to each of the zones. This can be inefficient for large range queries, and is also hard to implement in our design, where zone sizes are not pre-defined. Accordingly, we use a slightly different technique, in which a range query is initially routed to a zone corresponding to the entire range and is then progressively split into smaller subqueries. We describe this algorithm here.

The first step of the algorithm is to map a range query to a zone code prefix. Conceptually, this is easy: in a zone tree (Figure 3), there exists some node which contains the entire range query in its sub-tree, and none of whose children in the tree do. The initial zone code we choose for the query is the zone code corresponding to that tree node; it is a prefix of the zone codes of all zones in that subtree (note that these zones may not be geographically contiguous). The querier computes the zone code of Q, denoted by code(Q), and then starts routing the query to addr(code(Q)).

Upon receiving a range query Q, a node A (where A is any node on the query propagation path) divides it into multiple smaller subqueries if there is an overlap between the zone of A, zone(A), and the zone code associated with Q, code(Q). Our approach to splitting a query Q into subqueries is as follows. If the range of Q's first attribute contains the value 0.5, A divides Q into two sub-queries, one of whose first attribute ranges from 0 to 0.5, and the other from 0.5 to 1. A then determines which half overlaps with its own zone; call it QA. If QA does not exist, A stops splitting; otherwise, it continues splitting (using the second attribute range) and recomputing QA until QA is small enough that it falls completely into zone(A), at which point A can resolve it. For example, suppose that node A, whose code is 0110, is to split a range query Q = ⟨0.3−0.8, 0.6−0.9⟩.
After splitting, we obtain three smaller queries: q0 = ⟨0.3−0.5, 0.6−0.75⟩, q1 = ⟨0.3−0.5, 0.75−0.9⟩, and q2 = ⟨0.5−0.8, 0.6−0.9⟩. This splitting procedure is illustrated in Figure 9, which also shows the codes of each subquery after splitting.

A then replies to subquery q0 with data stored locally, and sends subqueries q1 and q2 using the procedure outlined above. More generally, if node A finds itself to be inside the zone subtree that maximally covers Q, it will send the subqueries that result from the split. Otherwise, if there is no overlap between A and Q, A forwards Q as is (in this case Q is either the original query or the product of an earlier split).

Figure 10 gives the pseudo-code for the query splitting algorithm. As shown in this algorithm, once a subquery has been recognized as belonging to the caller's zone, procedure Resolve is invoked to resolve the subquery and send a reply to the querier. Every query message contains the geographic location of its initiator, so the corresponding reply message can be delivered directly back to the initiator. Finally, in the process of query resolution, zones might shrink, similar to the shrinking during insertion; we omit this from the pseudo-code.
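The two halves of this machinery can be sketched as follows (our illustration, not the paper's code; names are hypothetical): encode_query finds the maximal covering prefix code(Q), and split_at_node peels off the non-overlapping halves at a node, reproducing the q0, q1, q2 example above for the node with code 0110. Both assume ranges normalized to [0, 1], attributes considered round-robin, and (for split_at_node) a query that overlaps the node's zone:

```python
def encode_query(ranges, max_k):
    """Longest zone-code prefix whose subtree covers the whole query.
    ranges: one (lo, hi) pair per attribute."""
    m = len(ranges)
    lo, hi = [0.0] * m, [1.0] * m
    bits = []
    for i in range(max_k):
        j = i % m
        mid = (lo[j] + hi[j]) / 2.0
        if ranges[j][1] <= mid:
            bits.append('0'); hi[j] = mid     # range entirely in the lower half
        elif ranges[j][0] >= mid:
            bits.append('1'); lo[j] = mid     # range entirely in the upper half
        else:
            break                             # range straddles this division
    return ''.join(bits)                      # e.g. '' for <0.3-0.8, 0.6-0.9>

def split_at_node(Q, node_code):
    """At a node with zone code node_code, split Q into the part q_A that
    overlaps the node's zone and the subqueries to forward elsewhere."""
    m = len(Q)
    lo, hi = [0.0] * m, [1.0] * m
    qa = [list(r) for r in Q]
    subs = []
    for i, bit in enumerate(node_code):
        j = i % m
        mid = (lo[j] + hi[j]) / 2.0
        if qa[j][0] < mid < qa[j][1]:         # straddles: split off the other half
            other = [r[:] for r in qa]
            if bit == '0':
                other[j][0] = mid; qa[j][1] = mid
            else:
                other[j][1] = mid; qa[j][0] = mid
            subs.append(other)
        if bit == '0': hi[j] = mid            # narrow the zone along dimension j
        else:          lo[j] = mid
    return qa, subs

qa, subs = split_at_node([(0.3, 0.8), (0.6, 0.9)], '0110')
assert qa == [[0.3, 0.5], [0.6, 0.75]]                                 # q0, resolved locally
assert subs == [[[0.5, 0.8], [0.6, 0.9]], [[0.3, 0.5], [0.75, 0.9]]]   # q2 and q1, forwarded
```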
3.5 Robustness

Until now, we have not discussed the impact of node failures, packet losses, or node arrivals and departures on our algorithms. Packet losses can affect query and event insertion, and node failures can result in lost data, while node arrivals and departures can impact the zone structure. We now discuss how DIMs can be made robust to these kinds of dynamics.

3.5.1 Maintaining Zones

In previous sections, we described how the zone discovery algorithm can leave zone boundaries undecided. These undecided boundaries are resolved during insertion or querying, using the zone shrinking procedure described above.

When a new node joins the network, the zone discovery mechanism (Section 3.2) will cause neighboring zones to adjust their zone boundaries appropriately. At this time, those zones can also transfer to the new node those events they store that should belong to the new node.

Before a node turns itself off (if this is indeed possible), it knows that its backup node (Section 3.1) will take over its zone, and it simply sends all its events to its backup node. Node deletion may also cause zone expansion. In order to preserve the mapping between the leaves of the binary zone tree and zones, we allow zone expansion to occur only among sibling zones (Section 3.1). The rule is: if zone(A)'s sibling zone becomes empty, then A can expand its own zone to include its sibling zone.

Now we turn our attention to node failures. Node failures are just like node deletions, except that a failed node does not have a chance to move its events to another node. But how does a node decide that its sibling has failed? If the sibling is within radio range, the absence of GPSR beaconing messages detects this, and the node can then expand its zone. A different approach is needed for detecting siblings that are not within radio range. These are the cases where two nodes own their zones after exchanging a shrink message; they do not periodically exchange messages thereafter to maintain this zone relationship. In this case, we detect the failure in a data-driven fashion, with obvious efficiency benefits compared to periodic keepalives. Once a node B has failed, an event or query message that previously would have been owned by the failed node will now be delivered to the node A that owns the empty zone left by node B. A can see this message because A stands right around the empty area left by B and is guaranteed to be visited in a GPSR perimeter traversal. A will set itself to be the owner of the message, and any node which would have dropped this message due to a perimeter loop will redirect the message to A instead. If A's zone happens to be the sibling of B's zone, A can safely expand its own zone and announce its expanded zone to its neighbors via GPSR beaconing messages.

3.5.2 Preventing Data Loss from Node Failure

The algorithms described above are robust in terms of zone formation, but node failure can erase data. To avoid this, DIMs can employ two kinds of replication: local replication, to be resilient to random node failures, and mirror replication, for resilience to concurrent failure of geographically contiguous nodes.

Mirror replication is conceptually easy. Suppose an event E has a zone code code(E). Then, the node that inserts E stores two copies of E: one at the zone denoted by code(E), and the other at the zone corresponding to the one's complement of code(E). This technique essentially creates a mirror DIM. A querier would need to query both the original DIM and its mirror in parallel, since there is no way of knowing whether a collection of nodes has failed. Clearly, the trade-off here is an approximate doubling of both insertion and query costs.

There exists a far cheaper technique to ensure resilience to random node failures. Our local replication technique rests on the observation that, for each node A, there exists a unique node which will take over its zone when A fails. This node is defined as the node responsible for the backup zone of A's zone (see Section 3.1). The basic idea is that A replicates each data item it has at this node, which we call A's local replica. Let A's local replica be B. Often B will be a radio neighbor of A and can be detected from GPSR beacons. Sometimes, however, this is not the case, and B has to be discovered explicitly.

We use an explicit message for discovering the local replica. Discovering the local replica is data-driven, and uses a mechanism similar to that of event insertion. Node A sends a message whose geographic destination is a random nearby location chosen by A. The location is close enough to A that GPSR guarantees the message will be delivered back to A. In addition, the message has three fields: one for the zone code of A, code(A); one for the owner owner(A) of zone(A), which is initially empty; and one for the geographic location of owner(A). The packet is then delivered in GPSR perimeter mode. Each node that receives this message compares its zone code with code(A) in the message, and if it is more eligible to be the owner of zone(A) than the current owner(A) recorded in the message, it updates the field owner(A) and the corresponding geographic location. Once the packet comes back to A, A knows the location of its local replica and can start to send replicas to it.

In a dense sensor network, the local replica of a node is usually very near to the node, either a direct neighbor or 1-2 hops away, so the cost of sending replicas to the local replica will not dominate the network traffic. However, a node's local replica may itself fail. There are two ways to deal with this situation: periodic refreshes, or repeated data-driven discovery of local replicas. The former has higher overhead, but discovers failed replicas more quickly.
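Both replica placements reduce to simple zone-code arithmetic. A sketch follows (ours, not the paper's protocol; backup_code assumes the set of leaf zone codes is available for illustration, whereas in the actual system the backup owner is discovered lazily, as described above):

```python
def mirror_code(code):
    """Zone of the mirror copy: the one's complement of code(E) (Section 3.5.2)."""
    return ''.join('1' if b == '0' else '0' for b in code)

def backup_code(code, leaves):
    """Backup zone of Section 3.1: flip the last bit, then keep appending the
    original last bit (the '*' expansion) until an actual leaf zone is reached.
    Assumes a well-formed zone tree given by the leaf-code set `leaves`."""
    p, last = code[:-1], code[-1]
    cand = p + ('0' if last == '1' else '1')
    while cand not in leaves:
        cand += last
    return cand

assert mirror_code('01110') == '10001'
leaves = {'00', '010', '011', '100', '101', '110', '1110', '1111'}
assert backup_code('110', leaves) == '1110'   # the empty zone's backup in Figure 2
```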
3.5.3 Robustness to Packet Loss

Finally, the mechanisms for querying and event insertion can easily be made resilient to packet loss. For event insertion, a simple ACK scheme suffices.

Of course, queries and responses can be lost as well. In this case, there exists an efficient approach for error recovery. It rests on the observation that the querier knows which zones fall within its query and should have responded (we assume that a node that has no data matching a query, but whose zone falls within the query, responds with a negative acknowledgment). After a conservative timeout, the querier can selectively re-issue the queries to these zones. If DIM cannot get any answers (positive or negative) from certain zones after repeated timeouts, it can at least return the partial query results to the application, together with information about the zones from which data is missing.

Figure 9: An example of range query splitting. The query ⟨0.3−0.8, 0.6−0.9⟩ is first split into ⟨0.3−0.5, 0.6−0.9⟩ and ⟨0.5−0.8, 0.6−0.9⟩; the former is then further split into ⟨0.3−0.5, 0.6−0.75⟩ and ⟨0.3−0.5, 0.75−0.9⟩.

Resolve-Range-Query(Q)
 1  Qsub ← nil
 2  q0, Qsub ← Split-Query(Q)
 3  if q0 = nil
 4    then c ← Encode(Q)
 5         if Contain(c, code(A)) = true
 6           then go to step 12
 7           else Send-Message(c, Q)
 8    else Resolve(q0)
 9         if is_Internal() = true
10           then Absorb(q0)
11           else append q0 to Qsub
12  if Qsub ≠ nil
13    then for each subquery q ∈ Qsub
14         do c ← Encode(q)
15            Send-Message(c, q)
Figure 10: Query resolving algorithm.

4. DIMS: AN ANALYSIS

In this section, we present a simple analytic performance evaluation of DIMs, and compare their performance against other possible approaches for implementing multi-dimensional range queries in sensor networks. In the next section, we validate these analyses using detailed packet-level simulations.

Our primary metrics for the performance of a DIM are:

Average insertion cost: the average number of messages required to insert an event into the network.

Average query delivery cost: the average number of messages required to route a query message to all the relevant nodes in the network. It does not include the number of messages required to transmit responses to the querier; this latter number depends upon the precise data distribution and is the same for many of the schemes we compare DIMs against.

In DIMs, event insertion essentially uses geographic routing. In a dense N-node network where the likelihood of traversing perimeters is small, the average event insertion cost is proportional to √N [23].

On the other hand, the query delivery cost depends upon the size of the ranges specified in the query. Recall that our query delivery mechanism is careful about splitting a query into sub-queries, doing so only when the query nears the zone that covers the query range.
Thus, when the querier is far from the queried zone, there are two components to the query delivery cost. The first, which is proportional to √N, is the cost of delivering the query near the covering zone. If there are M nodes within this covering zone, the message cost of splitting and delivering the query within it is proportional to M.

The average cost of query delivery depends upon the distribution of query range sizes. Suppose that query sizes follow some density function f(x); then the average cost of resolving a query can be approximated by ∫_1^N x f(x) dx. To give some intuition for the performance of DIMs, we consider four different forms for f(x): the uniform distribution, where a query range encompassing the entire network is as likely as a point query; a bounded uniform distribution, where all sizes up to a bound B are equally likely; an algebraic distribution, in which most queries are small but large queries are somewhat likely; and an exponential distribution, where most queries are small and large queries are unlikely. In all our analyses, we make the simplifying assumption that the size of a query is proportional to the number of nodes that can answer it.

For the uniform distribution, f(x) ∝ c for some constant c. If each query size from 1 to N is equally likely, the average query delivery cost of uniformly distributed queries is O(N). Thus, for uniformly distributed queries, the performance of DIMs is comparable to that of flooding. However, for the applications we envision, where nodes within the network are trying to correlate events, the uniform distribution is highly unrealistic.

Somewhat more realistic is a situation where all query sizes are bounded by a constant B. In this case, the average cost of resolving such a query is approximately ∫_1^B x f(x) dx = O(B). Recall now that all queries have to pay an approximate cost of O(√N) to deliver the query near the covering zone. Thus, if a DIM limited queries to a size proportional to √N, the average query cost would be O(√N).

The algebraic distribution, where f(x) ∝ x^(−k) for some constant k between 1 and 2, has an average query resolution cost given by ∫_1^N x f(x) dx = O(N^(2−k)). In this case, if k > 1.5, the average cost of query delivery is dominated by the cost of delivering the query near the covering zone, which is O(√N).

Finally, for the exponential distribution, f(x) = c e^(−cx) for some constant c, and the average cost is just the mean of the corresponding distribution, i.e., O(1) for large N. Asymptotically, then, the query cost for the exponential distribution is dominated by the cost of delivering the query near the covering zone, O(√N).

Thus, we see that if queries follow the bounded uniform distribution, the algebraic distribution, or the exponential distribution, the query cost scales as the insertion cost does (for appropriate choices of constants for the bounded uniform and algebraic distributions).
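For concreteness, the four integrals can be worked out as follows (our elaboration of the statements above; normalization constants are treated loosely, since they do not affect the orders of growth):

```latex
% Average query-splitting cost \int_1^N x\,f(x)\,dx for each query-size density.
\begin{align*}
\text{uniform, } f(x) \approx \tfrac{1}{N}:\quad
   &\int_1^N \tfrac{x}{N}\,dx = \tfrac{N^2-1}{2N} = O(N)\\
\text{bounded uniform, } f(x) \approx \tfrac{1}{B} \text{ on } [1,B]:\quad
   &\int_1^B \tfrac{x}{B}\,dx = O(B)\\
\text{algebraic, } f(x) \propto x^{-k},\ 1 < k < 2:\quad
   &\int_1^N x^{1-k}\,dx = \tfrac{N^{2-k}-1}{2-k} = O\big(N^{2-k}\big)\\
\text{exponential, } f(x) = c\,e^{-cx}:\quad
   &\int_1^N x\,c\,e^{-cx}\,dx \le \tfrac{1}{c} = O(1)
\end{align*}
% Adding the O(\sqrt{N}) cost of routing the query to the covering zone gives the
% totals quoted in the text; e.g., for k > 1.5 the algebraic case is dominated
% by the O(\sqrt{N}) delivery term.
```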
How well does the performance of DIMs compare against alternative choices for implementing multi-dimensional queries? A simple alternative is called external storage [23], where all events are stored centrally in a node outside the sensor network. This scheme incurs an insertion cost of O(√N) and a zero query cost. However, as [23] points out, such systems may be impractical in sensor networks, since the access link to the external node becomes a hotspot.

A second alternative implementation would store events at the node where they are generated. Queries are flooded throughout the network, and nodes that have matching data respond. Examples of systems that can be used in this way (although, to our knowledge, these systems do not implement multi-dimensional range queries) are Directed Diffusion [15] and TinyDB [17]. The flooding scheme incurs zero insertion cost, but an O(N) query cost. It is easy to show that DIMs outperform flooding as long as the ratio of the number of insertions to the number of queries is less than √N.

A final alternative would be to use a geographic hash table (GHT [20]). In this approach, attribute values are assumed to be integers (this is actually quite a reasonable assumption, since attribute values are often quantized), and events are hashed on some attribute (say, the first). A range query is sub-divided into several sub-queries, one for each integer in the range of the first attribute. Each sub-query is then hashed to the appropriate location. The nodes that receive a sub-query return only those events that match all other attribute ranges. In this approach, which we call GHT-R (GHTs for range queries), the insertion cost is O(√N). Suppose that the range of the first attribute contains r discrete values. Then the cost to deliver queries is O(r√N). Thus, asymptotically, GHT-Rs perform similarly to DIMs. In practice, however, the proportionality constants are significantly different, and DIMs outperform GHT-Rs, as we shall show using detailed simulations.

5. DIMS: SIMULATION RESULTS

Our analysis gives us some insight into the asymptotic behavior of various approaches for multi-dimensional range queries. In this section, we use simulation to compare DIMs against flooding and GHT-R; this comparison gives us a more detailed understanding of these approaches for moderately sized networks, and a more nuanced view of the mechanistic differences between them.

5.1 Simulation Methodology

We use ns-2 for our simulations. Since DIMs are implemented on top of GPSR, we first ported an earlier GPSR implementation to the latest version of ns-2. We modified the GPSR module to call our DIM implementation when it receives any data message in transit, or when it is about to drop a message because that message has traversed the entire perimeter. This allows a DIM to modify message zone codes in flight (Section 3) and to determine the actual owner of an event or query.

In addition to this, we implemented in ns-2 most of the DIM mechanisms described in Section 3. Of those mechanisms, the only one we did not implement is mirror replication. We have implemented selective query retransmission for resiliency to packet loss, but have left the evaluation of this mechanism to future work. Our DIM implementation in ns-2 is 2800 lines of code.

Finally, we implemented GHT-R, our GHT-based multi-dimensional range query mechanism, in ns-2. This implementation was relatively straightforward, given that we had ported GPSR and modified it to detect the completion of perimeter-mode traversals.

Using this implementation, we conducted a fairly extensive evaluation of DIM and two alternatives (flooding, and our GHT-R).
5. DIMS: SIMULATION RESULTS
Our analysis gives us some insight into the asymptotic behavior of various approaches for multi-dimensional range queries. In this section, we use simulation to compare DIMs against flooding and GHT-R; this comparison gives us a more detailed understanding of these approaches for moderate-size networks, and gives us a nuanced view of the mechanistic differences between some of these approaches.
5.1 Simulation Methodology
We use ns-2 for our simulations. Since DIMs are implemented on top of GPSR, we first ported an earlier GPSR implementation to the latest version of ns-2. We modified the GPSR module to call our DIM implementation when it receives any data message in transit or when it is about to drop a message because that message traversed the entire perimeter. This allows a DIM to modify message zone codes in flight (Section 3), and to determine the actual owner of an event or query.
In addition to this, we implemented in ns-2 most of the DIM mechanisms described in Section 3. Of those mechanisms, the only one we did not implement is mirror replication. We have implemented selective query retransmission for resiliency to packet loss, but have left the evaluation of this mechanism to future work. Our DIM implementation in ns-2 is 2800 lines of code.
Finally, we implemented GHT-R, our GHT-based multi-dimensional range query mechanism, in ns-2. This implementation was relatively straightforward, given that we had ported GPSR, and had modified GPSR to detect the completion of perimeter-mode traversals.
Using this implementation, we conducted a fairly extensive evaluation of DIM and two alternatives (flooding, and our GHT-R). For all our experiments, we use uniformly placed sensor nodes with network sizes ranging from 50 nodes to 300 nodes. Each node has a radio range of 40 m. For the results presented here, each node has on average 20 nodes within its nominal radio range. We have conducted experiments at other node densities; they are in agreement with the results presented here.
In all our experiments, each node first generates 3 events on average (more precisely, for a topology of size N, we have 3N events, and each node is equally likely to generate an event). [Footnote 5: Our metrics are chosen so that the exact number of events and queries is unimportant for our discussion. Of course, the overall performance of the system will depend on the relative frequency of events and queries, as we discuss in Section 4. Since we don't have realistic ratios for these, we focus on the microscopic costs, rather than on the overall system costs.]
We have conducted experiments for three different event value distributions. Our uniform event distribution generates 2-dimensional events and, for each dimension, every attribute value is equally likely. Our normal event distribution generates 2-dimensional events and, for each dimension, the attribute value is normally distributed with a mean corresponding to the mid-point of the attribute value range. The normal event distribution represents a skewed data set. Finally, our trace event distribution is a collection of 4-dimensional events obtained from a habitat monitoring network. As we shall see, this represents a fairly skewed data set.
Having generated events, for each simulation we generate queries such that, on average, each node generates 2 queries. The query sizes are determined using the four size distributions we discussed in Section 4: uniform, bounded-uniform, algebraic, and exponential. Once a query size has been determined, the location of the query (i.e., the actual boundaries of the zone) is uniformly distributed. For our GHT-R experiments, the dynamic range of the attributes had 100 discrete values, but we restricted the query range for any one attribute to 50 discrete values to allow those simulations to complete in reasonable time.
Finally, using one set of simulations we evaluate the efficacy of local replication by turning off random fractions of nodes and measuring the fidelity of the returned results.
The primary metrics for our simulations are the average query and insertion costs, as defined in Section 4.
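A workload generator matching this description might look as follows (a sketch under stated assumptions: a 2-D unit attribute space, clamped Gaussians for the "normal" distribution, and distribution parameters chosen by us for illustration):

```python
# Generate 3N events with uniform or skewed ("normal") attribute values, and
# uniformly placed square query ranges whose side length follows one of the
# query-size distributions from Section 4.
import random

def gen_events(n_nodes, dist="uniform"):
    events = []
    for _ in range(3 * n_nodes):               # 3 events per node on average
        owner = random.randrange(n_nodes)      # each node equally likely
        if dist == "uniform":
            attrs = (random.random(), random.random())
        else:                                   # "normal": skewed toward the midpoint
            attrs = tuple(min(max(random.gauss(0.5, 0.15), 0.0), 1.0)
                          for _ in range(2))
        events.append((owner, attrs))
    return events

def gen_query(size_dist="exponential"):
    if size_dist == "bounded-uniform":
        side = random.uniform(0.0, 0.5)         # bound B = 1/4 of the space
    else:                                       # exponential: small queries likely
        side = min(random.expovariate(1 / 0.25), 1.0)
    x0 = random.uniform(0.0, 1.0 - side)
    y0 = random.uniform(0.0, 1.0 - side)
    return (x0, x0 + side, y0, y0 + side)       # uniformly located rectangle
```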
5.2 Results
Although we have examined almost all the combinations of factors described above, we discuss only the most salient ones here, for lack of space.
Figure 11 plots the average insertion costs for DIM and GHT-R (for flooding, of course, the insertion costs are zero). DIM incurs less per-event overhead in inserting events (regardless of the actual event distribution; Figure 11 shows the cost for uniformly distributed events). The reason for this is interesting. In GHT-R, storing almost every event incurs a perimeter traversal, and storing some events requires traversing the outer perimeter of the network [20]. By contrast, in DIM, storing an event incurs a perimeter traversal only when a node's boundaries are undecided. Furthermore, an insertion or a query in a DIM can traverse the outer perimeter (Section 3.3), but less frequently than in GHTs.
[Figure 11: Average insertion cost for DIM and GHT-R (average cost per insertion vs. network size, 50-300 nodes).]
Figure 13 plots the average query cost for a bounded uniform query size distribution. For this graph (and the next) we use a uniform event distribution, since the event distribution does not affect the query delivery cost. For this simulation, our bound was 1/4th the size of the largest possible query (e.g., a query of the form 0-0.5, 0-0.5). Even for this generous query size, DIMs perform quite well (almost a third the cost of flooding). Notice, however, that GHT-Rs incur a high query cost, since almost any query requires as many sub-queries as the width of the first attribute's range.
[Figure 13: Average query cost with a bounded uniform query distribution (average cost per query vs. network size, for DIM, flooding, and GHT-R).]
Figure 14 plots the average query cost for the exponential distribution (the average query size for this distribution was set to be 1/16th the largest possible query). The superior scaling of DIMs is evident in these graphs. Clearly, this is the regime in which one might expect DIMs to perform best, when most of the queries are small and large queries are relatively rare. This is also the regime in which one would expect to use multi-dimensional range queries: to perform relatively tight correlations. As with the bounded uniform distribution, GHT query cost is dominated by the cost of sending sub-queries; for DIMs, the query splitting strategy works quite well in keeping overall query delivery costs low.
[Figure 14: Average query cost with an exponential query distribution (average cost per query vs. network size, for DIM, flooding, and GHT-R).]
Figure 12 describes the efficacy of local replication. To obtain this figure, we conducted the following experiment. On a 100-node network, we inserted a number of events uniformly distributed throughout the network, then issued a query covering the entire network and recorded the answers. Knowing the expected answers for this query, we then successively removed a fraction f of nodes randomly, and re-issued the same query. The figure plots the fraction of expected responses actually received, with and without replication. As the graph shows, local replication performs well for random failures, returning almost 90% of the responses when up to 30% of the nodes have failed simultaneously. In the absence of local replication, of course, when 30% of the nodes fail, the response rate is only 70%, as one would expect.
[Figure 12: Local replication performance (fraction of replies compared with the non-failure case vs. fraction of failed nodes, with and without local replication).]
[Footnote 6: In practice, the performance of local replication is likely to be much better than this. Assuming a node and its replica don't simultaneously fail often, a node will almost always detect a replica failure and re-replicate, leading to near 100% response rates.]
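A self-contained toy model of this failure experiment (our own reconstruction, not the paper's simulation code; it abstracts local replication as one backup copy per event on a randomly chosen second node) reproduces the qualitative gap between the two curves:

```python
# Fraction of events still answerable after failing a random fraction of a
# 100-node network, with and without a single replica per event. Without
# replication fidelity falls like (1 - f); with it, like (1 - f^2).
import random

def fidelity(n_nodes=100, n_events=500, frac_failed=0.3, replicated=True):
    nodes = list(range(n_nodes))
    events = []
    for _ in range(n_events):
        owner = random.choice(nodes)
        backup = random.choice([n for n in nodes if n != owner])
        events.append((owner, backup if replicated else owner))
    failed = set(random.sample(nodes, int(frac_failed * n_nodes)))
    alive = sum(1 for o, b in events if o not in failed or b not in failed)
    return alive / n_events

for f in (0.1, 0.2, 0.3):
    print(f"f={f:.1f}  no-replication={fidelity(frac_failed=f, replicated=False):.2f}"
          f"  local-replication={fidelity(frac_failed=f):.2f}")
```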
We note that DIMs (as currently designed) are not perfect. When the data is highly skewed, as it was for our trace data set from the habitat monitoring application, where most of the event values fell within 10% of the attribute's range, a few DIM nodes will clearly become the bottleneck. This is depicted in Figure 15, which shows that for DIMs and GHT-Rs, the maximum number of transmissions at any network node (the hotspots) is rather high. (For less skewed data distributions, and reasonable query size distributions, the hotspot curves for all three schemes are comparable.) This is a standard problem that database indices have dealt with by tree re-balancing. In our case, simpler solutions might be possible (and we discuss this in Section 7).
[Figure 15: Hotspot usage on the trace data set (maximum number of transmissions at any node vs. network size, for DIM, flooding, and GHT-R).]
However, our use of the trace data demonstrates that DIMs work for events which have more than two dimensions. Increasing the number of dimensions does not noticeably degrade DIMs' query cost (omitted for lack of space).
Also omitted are experiments examining the impact of several other factors, as they do not affect our conclusions in any way. As we expected, DIMs are comparable in performance to flooding when all sizes of queries are equally likely. For an algebraic distribution of query sizes, the relative performance is close to that for the exponential distribution. For normally distributed events, the insertion costs are comparable to those for the uniform distribution.
Finally, we note that in all our evaluations we have only used list queries (those that request all events matching the specified range). We expect that for summary queries (those that expect an aggregate over matching events), the overall cost of DIMs could be lower, because the matching data are likely to be found in one or a small number of zones. We leave an understanding of this to future work. Also left to future work is a detailed understanding of the impact of location error on DIM's mechanisms. Recent work [22] has examined the impact of imprecise location information on other data-centric storage mechanisms such as GHTs, and found that there exist relatively simple fixes to GPSR that ameliorate the effects of location error.
6. IMPLEMENTATION
We have implemented DIMs on a Linux platform suitable for experimentation on PDAs and PC-104 class machines. To implement DIMs, we had to develop and test an independent implementation of GPSR. Our GPSR implementation is full-featured, while our DIM implementation has most of the algorithms discussed in Section 3; some of the robustness extensions have only simpler variants implemented.
The software architecture of the DIM/GPSR system is shown in Figure 16. The entire system (about 5000 lines of code) is event-driven and multi-threaded. The DIM subsystem consists of six logical components: zone management, event maintenance, event routing, query routing, query processing, and GPSR interactions. The GPSR system is implemented as a user-level daemon process. Applications are executed as clients.
[Figure 16: Software architecture of DIM over GPSR. DIM components: zone manager, query router, query processor, event manager, event router. GPSR components: greedy forwarding, perimeter forwarding, beaconing, neighbor list manager. Lower interfaces: MoteNIC (Mica radio) and IP socket (802.11b/Ethernet).]
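An illustrative skeleton of this decomposition (ours, not the authors' code; all class and callback names are assumptions) shows how the DIM components could hang off the GPSR extensions described next:

```python
# Sketch of the DIM subsystem wired to a GPSR layer through callbacks for
# in-transit packets and perimeter-traversal completion.
class Gpsr:
    def __init__(self):
        self.forward_cb = None
        self.perimeter_cb = None
    def on_forward(self, cb): self.forward_cb = cb            # packet in transit
    def on_perimeter_done(self, cb): self.perimeter_cb = cb   # traversal ended
    def neighbors(self): return []                            # exported neighbor info

class DimNode:
    """Zone management, event maintenance/routing, query routing/processing."""
    def __init__(self, gpsr):
        self.gpsr = gpsr
        gpsr.on_forward(self.refine_zone_code)      # fix message zone codes in flight
        gpsr.on_perimeter_done(self.resolve_owner)  # decide the true owner of an event/query
    def refine_zone_code(self, pkt): ...
    def resolve_owner(self, pkt): ...
```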
For the DIM subsystem, the GPSR module provides several extensions: it exports information about neighbors, and provides callbacks during packet forwarding and perimeter-mode termination.
We tested our implementation on a testbed consisting of 8 PC-104 class machines. Each of these boxes runs Linux and uses a Mica mote (attached through a serial cable) for communication. These boxes are laid out in an office building with a total spatial separation of over a hundred feet. We manually measured the locations of these nodes relative to some coordinate system and configured the nodes with their location. The network topology is approximately a chain.
On this testbed, we inserted queries and events from a single designated node. Our events have two attributes which span all combinations of the four values [0, 0.25, 0.75, 1] (sixteen events in all). Our queries span four sizes, returning 1, 4, 9, and 16 events respectively.
Figure 17 plots the number of events received for different-sized queries. It might appear that we received fewer events than expected, but this graph doesn't count the events that were already stored at the querier. With that adjustment, the number of responses matches our expectation. Finally, Figure 18 shows the total number of messages required for different query sizes on our testbed.
[Figure 17: Number of events received for different query sizes (average number of received responses per query, for query sizes 0.25x0.25 through 1.0x1.0).]
[Figure 18: Query distribution cost (total number of messages for sending the query, for query sizes 0.25x0.25 through 1.0x1.0).]
While these experiments do not reveal as much about the performance range of DIMs as our simulations do, they nevertheless serve as a proof-of-concept for DIMs. Our next step in the implementation is to port DIMs to the Mica motes, and integrate them into the TinyDB [17] sensor database engine on motes.
7. CONCLUSIONS
In this paper, we have discussed the design and evaluation of a distributed data structure called DIM for efficiently resolving multi-dimensional range queries in sensor networks. Our design of DIMs relies upon a novel locality-preserving hash inspired by early work in database indexing, and is built upon GPSR. We have a working prototype of both GPSR and DIM, and plan to conduct larger-scale experiments in the future.
There are several interesting future directions that we intend to pursue. One is adaptation to skewed data distributions, since these can cause storage and transmission hotspots. Unlike traditional database indices that re-balance trees upon data insertion, in sensor networks it might be feasible to re-structure the zones on a much larger timescale after obtaining a rough global estimate of the data distribution. Another direction is support for node heterogeneity in the zone construction process; nodes with larger storage space assert larger-sized zones for themselves. A third is support for efficient resolution of existential queries, i.e., whether there exists an event matching a multi-dimensional range.
Acknowledgments
This work benefited greatly from discussions with Fang Bian, Hui Zhang and other ENL lab members, as well as from comments provided by the reviewers and our shepherd Feng Zhao.
8. REFERENCES
[1] J. Aspnes and G. Shah. Skip Graphs. In Proceedings of 14th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), Baltimore, MD, January 2003.
[2] J. L. Bentley. Multidimensional Binary Search Trees Used for Associative Searching. Communications of the ACM, 18(9):475-484, 1975.
[3] P. Bonnet, J. E. Gerhke, and P. Seshadri. Towards Sensor Database Systems. In Proceedings of the Second International Conference on Mobile Data Management, Hong Kong, January 2001.
[4] I. Clarke, O. Sandberg, B. Wiley, and T. W. Hong. Freenet: A Distributed Anonymous Information Storage and Retrieval System. In Designing Privacy Enhancing Technologies: International Workshop on Design Issues in Anonymity and Unobservability. Springer, New York, 2001.
[5] D. Comer. The Ubiquitous B-tree. ACM Computing Surveys, 11(2):121-137, 1979.
[6] R. A. Finkel and J. L. Bentley. Quad Trees: A Data Structure for Retrieval on Composite Keys. Acta Informatica, 4:1-9, 1974.
[7] D. Ganesan, D. Estrin, and J. Heidemann. DIMENSIONS: Why do we need a new Data Handling architecture for Sensor Networks? In Proceedings of the First Workshop on Hot Topics In Networks (HotNets-I), Princeton, NJ, October 2002.
[8] A. Gionis, P. Indyk, and R. Motwani. Similarity Search in High Dimensions via Hashing. In Proceedings of the 25th VLDB Conference, Edinburgh, Scotland, September 1999.
[9] R. Govindan, J. Hellerstein, W. Hong, S. Madden, M. Franklin, and S. Shenker. The Sensor Network as a Database. Technical Report 02-771, Computer Science Department, University of Southern California, September 2002.
[10] B. Greenstein, D. Estrin, R. Govindan, S. Ratnasamy, and S. Shenker. DIFS: A Distributed Index for Features in Sensor Networks. In Proceedings of 1st IEEE International Workshop on Sensor Network Protocols and Applications, Anchorage, AK, May 2003.
[11] A. Guttman. R-trees: A Dynamic Index Structure for Spatial Searching. In Proceedings of the ACM SIGMOD, Boston, MA, June 1984.
[12] M. Harren, J. M. Hellerstein, R. Huebsch, B. T. Loo, S. Shenker, and I. Stoica. Complex Queries in DHT-based Peer-to-Peer Networks. In P. Druschel, F. Kaashoek, and A. Rowstron, editors, Proceedings of 1st International Workshop on Peer-to-Peer Systems (IPTPS '02), volume 2429 of LNCS, page 242, Cambridge, MA, March 2002. Springer-Verlag.
[13] P. Indyk and R. Motwani. Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality. In Proceedings of the 30th Annual ACM Symposium on Theory of Computing, Dallas, TX, May 1998.
[14] P. Indyk, R. Motwani, P. Raghavan, and S. Vempala. Locality-preserving Hashing in Multidimensional Spaces. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing, pages 618-625, El Paso, TX, May 1997. ACM Press.
[15] C. Intanagonwiwat, R. Govindan, and D. Estrin. Directed Diffusion: A Scalable and Robust Communication Paradigm for Sensor Networks. In Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 2000), Boston, MA, August 2000.
[16] B. Karp and H. T. Kung. GPSR: Greedy Perimeter Stateless Routing for Wireless Networks. In Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom 2000), Boston, MA, August 2000.
[17] S. Madden, M. Franklin, J. Hellerstein, and W. Hong. The Design of an Acquisitional Query Processor for Sensor Networks. In Proceedings of ACM SIGMOD, San Diego, CA, June 2003.
[18] S. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. TAG: a Tiny AGgregation Service for Ad-Hoc Sensor Networks. In Proceedings of 5th Annual Symposium on Operating Systems Design and Implementation (OSDI), Boston, MA, December 2002.
[19] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A Scalable Content-Addressable Network. In Proceedings of the ACM SIGCOMM, San Diego, CA, August 2001.
[20] S. Ratnasamy, B. Karp, L. Yin, F. Yu, D. Estrin, R. Govindan, and S. Shenker. GHT: A Geographic Hash Table for Data-Centric Storage. In Proceedings of the First ACM International Workshop on Wireless Sensor Networks and Applications, Atlanta, GA, September 2002.
[21] H. Samet. Spatial Data Structures. In W. Kim, editor, Modern Database Systems: The Object Model, Interoperability and Beyond, pages 361-385. Addison Wesley/ACM, 1995.
[22] K. Seada, A. Helmy, and R. Govindan. On the Effect of Localization Errors on Geographic Face Routing in Sensor Networks. Under submission, 2003.
[23] S. Shenker, S. Ratnasamy, B. Karp, R. Govindan, and D. Estrin. Data-Centric Storage in Sensornets. In Proc. ACM SIGCOMM Workshop on Hot Topics In Networks, Princeton, NJ, 2002.
[24] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan. Chord: A Scalable Peer-To-Peer Lookup Service for Internet Applications. In Proceedings of the ACM SIGCOMM, San Diego, CA, August 2001.
[25] F. Ye, H. Luo, J. Cheng, S. Lu, and L. Zhang. A Two-Tier Data Dissemination Model for Large-scale Wireless Sensor Networks. In Proceedings of the Eighth Annual ACM/IEEE International Conference on Mobile Computing and Networking (MobiCom '02), Atlanta, GA, September 2002.
", "keywords": "datacentric storage system;multi-dimensional range query;multidimensional range query;event insertion;querying cost;query flooding;indexing technique;dim;distributed datum structure;asymptotic behavior;locality-preserving geographic hash;sensor network;geographic routing;centralized index;normal event distribution;efficient correlation;distributed index"} {"name": "train_C-49", "title": "Evaluating Opportunistic Routing Protocols with Large Realistic Contact Traces", "abstract": "Traditional mobile ad-hoc network (MANET) routing protocols assume that contemporaneous end-to-end communication paths exist between data senders and receivers. In some mobile ad-hoc networks with a sparse node population, an end-to-end communication path may break frequently or may not exist at any time. Many routing protocols have been proposed in the literature to address the problem, but few were evaluated in a realistic opportunistic network setting. We use simulation and contact traces (derived from logs in a production network) to evaluate and compare five existing protocols: direct-delivery, epidemic, random, PRoPHET, and Link-State, as well as our own proposed routing protocol. We show that the direct delivery and epidemic routing protocols suffer either low delivery ratio or high resource usage, and other protocols make tradeoffs between delivery ratio and resource usage.", "fulltext": "1. INTRODUCTION
Mobile opportunistic networks are one kind of delay-tolerant network (DTN) [6]. Delay-tolerant networks provide service despite long link delays or frequent link breaks. Long link delays happen in networks with communication between nodes at a great distance, such as interplanetary networks [2]. Link breaks are caused by nodes moving out of range, environmental changes, interference from other moving objects, radio power-offs, or failed nodes.
For us, mobile opportunistic networks are those DTNs with a sparse node population and frequent link breaks caused by power-offs and the mobility of the nodes.
Mobile opportunistic networks have received increasing interest from researchers. In the literature, these networks include mobile sensor networks [25], wild-animal tracking networks [11], pocket-switched networks [8], and transportation networks [1, 14]. We expect to see more opportunistic networks when the one-laptop-per-child (OLPC) project [18] starts rolling out inexpensive laptops with wireless networking capability for children in developing countries, where often no infrastructure exists. Opportunistic networking is one promising approach for those children to exchange information.
One fundamental problem in opportunistic networks is how to route messages from their source to their destination. Mobile opportunistic networks differ from the Internet in that disconnections are the norm instead of the exception. In mobile opportunistic networks, communication devices can be carried by people [4], vehicles [1] or animals [11]. Some devices can form a small mobile ad-hoc network when the nodes move close to each other. But a node may frequently be isolated from other nodes. Note that traditional Internet routing protocols and ad-hoc routing protocols, such as AODV [20] or DSDV [19], assume that a contemporaneous end-to-end path exists, and thus fail in mobile opportunistic networks. Indeed, there may never exist an end-to-end path between two given devices.
In this paper, we study protocols for routing messages between wireless networking devices carried by people. We assume that people send messages to other people occasionally, using their devices; when no direct link exists between the source and the destination of the message, other nodes may relay the message to the destination. Each device represents a unique person (it is out of the scope of this paper when a device may be carried by multiple people). Each message is destined for a specific person and thus for a specific node carried by that person. Although one person may carry multiple devices, we assume that the sender knows which device is the best to receive the message. We do not consider multicast or geocast in this paper.
Many routing protocols have been proposed in the literature. Few of them were evaluated in realistic network settings, or even in realistic simulations, due to the lack of any realistic people mobility model. Random walk or random way-point mobility models are often used to evaluate the performance of those routing protocols. Although these synthetic mobility models have received extensive interest by mobile ad-hoc network researchers [3], they do not reflect people's mobility patterns [9]. Realising the limitations of using random mobility models in simulations, a few researchers have studied routing protocols in mobile opportunistic networks with realistic mobility traces. Chaintreau et al. [5] theoretically analyzed the impact of routing algorithms over a model derived from a realistic mobility data set. Su et al. [22] simulated a set of routing protocols in a small experimental network. Those studies help researchers better understand the theoretical limits of opportunistic networks, and the routing protocol performance in a small network (20-30 nodes). Since deploying and experimenting with large-scale mobile opportunistic networks is difficult, we too resort to simulation.
Instead of using a complex mobility model to mimic people's mobility patterns, we used mobility traces collected in a production wireless network at Dartmouth College to drive our simulation. Our message-generation model, however, was synthetic.
To the best of our knowledge, we are the first to simulate the effect of routing protocols in a large-scale mobile opportunistic network, using realistic contact traces derived from real traces of a production network with more than 5,000 users.
Using realistic contact traces, we evaluate the performance of three naive routing protocols (direct-delivery, epidemic, and random) and two prediction-based routing protocols, PRoPHET [16] and Link-State [22]. We also propose a new prediction-based routing protocol, and compare it to the above in our evaluation.
2. ROUTING PROTOCOL
A routing protocol is designed for forwarding messages from one node (source) to another node (destination). Any node may generate messages for any other node, and may carry messages destined for other nodes. In this paper, we consider only messages that are unicast (single destination).
DTN routing protocols can be described in part by their transfer probability and replication probability; that is, when one node meets another node, what is the probability that a message should be transferred, and if so, whether the sender should retain its copy. Two extremes are the direct-delivery protocol and the epidemic protocol. The former transfers with probability 1 when the node meets the destination, 0 for all other nodes, and does no replication. The latter uses transfer probability 1 for all nodes and unlimited replication. Both of these protocols have their advantages and disadvantages. All other protocols lie between the two extremes.
First, we define the notion of contact between two nodes. Then we describe five existing protocols before presenting our own proposal.
A contact is defined as a period of time during which two nodes have the opportunity to communicate. Although we are aware that wireless technologies differ, we assume that a node can reliably detect the beginning and end time of a contact with nearby nodes. A node may be in contact with several other nodes at the same time. The contact history of a node is a sequence of contacts with other nodes. Node i has a contact history $H_i(j)$, for each other node j, which denotes the historical contacts between node i and node j. We record the start and end time for each contact; however, the last contacts in a node's contact history may not have ended.
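A minimal representation of this per-neighbor contact history (our sketch; the class and field names are our own) could be:

```python
# Each node keeps, for every peer j, the history H_i(j) as a list of
# (start, end) intervals; the most recent contact may still be open.
from dataclasses import dataclass, field

@dataclass
class ContactHistory:
    contacts: list = field(default_factory=list)   # [[start, end-or-None], ...]

    def begin(self, t):
        self.contacts.append([t, None])            # contact opens at time t

    def end(self, t):
        if self.contacts and self.contacts[-1][1] is None:
            self.contacts[-1][1] = t               # close the open contact

history = {}   # history[(i, j)] -> ContactHistory kept by node i about node j
```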
2.1 Direct Delivery Protocol
In this simple protocol, a message is transmitted only when the source node can directly communicate with the destination node of the message. In mobile opportunistic networks, however, the probability for the sender to meet the destination may be low, or even zero.
2.2 Epidemic Routing Protocol
The epidemic routing protocol [23] floods messages into the network. The source node sends a copy of the message to every node that it meets. The nodes that receive a copy of the message also send a copy of the message to every node that they meet. Eventually, a copy of the message arrives at the destination of the message. This protocol is simple, but may use significant resources; excessive communication may drain each node's battery quickly. Moreover, since each node keeps a copy of each message, storage is not used efficiently, and the capacity of the network is limited.
At a minimum, each node must expire messages after some amount of time or stop forwarding them after a certain number of hops. After a message expires, the message will not be transmitted and will be deleted from the storage of any node that holds the message.
An optimization to reduce the communication cost is to transfer index messages before transferring any data messages. The index messages contain the IDs of messages that a node currently holds. Thus, by examining the index messages, a node only transfers messages that are not yet contained on the other node.
2.3 Random Routing
An obvious approach between the above two extremes is to select a transfer probability between 0 and 1 to forward messages at each contact. We use a simple replication strategy that allows only the source node to make replicas, and limits the replication to a specific number of copies. The message has some chance of being transferred to a highly mobile node, and thus may have a better chance to reach its destination before the message expires.
2.4 PRoPHET Protocol
PRoPHET [16] is a Probabilistic Routing Protocol using History of past Encounters and Transitivity to estimate each node's delivery probability for each other node. When node i meets node j, the delivery probability of node i for j is updated by
$p_{ij} = (1 - p_{ij})\,p_0 + p_{ij}$,   (1)
where $p_0$ is an initial probability, a design parameter for a given network. Lindgren et al. [16] chose 0.75, as did we in our evaluation. When node i does not meet j for some time, the delivery probability decreases by
$p_{ij} = \alpha^{k}\,p_{ij}$,   (2)
where α is the aging factor (α < 1), and k is the number of time units since the last update.
The PRoPHET protocol exchanges index messages as well as delivery probabilities. When node i receives node j's delivery probabilities, node i may compute the transitive delivery probability through j to z with
$p_{iz} = p_{iz} + (1 - p_{iz})\,p_{ij}\,p_{jz}\,\beta$,   (3)
where β is a design parameter for the impact of transitivity; we used β = 0.25, as did Lindgren [16].
2.5 Link-State Protocol
Su et al. [22] use a link-state approach to estimate the weight of each path from the source of a message to the destination. They use the median inter-contact duration or the exponentially aged inter-contact duration as the weight on links. The exponentially aged inter-contact duration of nodes i and j is computed by
$w_{ij} = \alpha w_{ij} + (1 - \alpha)I$,   (4)
where I is the new inter-contact duration and α is the aging factor. Nodes share their link-state weights when they can communicate with each other, and messages are forwarded to the node that has the path with the lowest link-state weight.
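The four update rules above transcribe directly into code; the following sketch (ours) uses the parameter values given in the text (p0 = 0.75, β = 0.25, α < 1):

```python
# Update rules (1)-(3) for PRoPHET and (4) for the Link-State weight.
P0, BETA = 0.75, 0.25

def prophet_on_meet(p_ij):
    return (1 - p_ij) * P0 + p_ij                   # Equation (1)

def prophet_age(p_ij, alpha, k):
    return (alpha ** k) * p_ij                      # Equation (2): k time units idle

def prophet_transitive(p_iz, p_ij, p_jz):
    return p_iz + (1 - p_iz) * p_ij * p_jz * BETA   # Equation (3)

def linkstate_age(w_ij, new_intercontact, alpha=0.9):
    return alpha * w_ij + (1 - alpha) * new_intercontact  # Equation (4)
```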
3. TIMELY-CONTACT PROBABILITY
We also use historical contact information to estimate the probability of meeting other nodes in the future. But our method differs in that we estimate the contact probability within a period of time. For example, what is the contact probability in the next hour? Neither PRoPHET nor Link-State considers time in this way.
One way to estimate the timely-contact probability is to use the ratio of the total contact duration to the total time. However, this approach does not capture the frequency of contacts. For example, one node may have a long contact with another node, followed by a long non-contact period. A third node may have a short contact with the first node, followed by a short non-contact period. Using the above estimation approach, both examples would have similar contact probability. In the second example, however, the two nodes have more frequent contacts.
We designed a method to capture the contact frequency of mobile nodes. For this purpose, we assume that even short contacts are sufficient to exchange messages. [Footnote 1: In our simulation, however, we accurately model the communication costs, and some short contacts will not succeed in transferring all messages.]
The probability for node i to meet node j is computed by the following procedure. We divide the contact history $H_i(j)$ into a sequence of n periods of ΔT, starting from the start time ($t_0$) of the first contact in history $H_i(j)$ to the current time. We number each of the n periods from 0 to n − 1, then check each period. If node i had any contact with node j during a given period m, which is $[t_0 + m\Delta T,\; t_0 + (m+1)\Delta T)$, we set the contact status $I_m$ to be 1; otherwise, the contact status $I_m$ is 0. The probability $p^{(0)}_{ij}$ that node i meets node j in the next ΔT can be estimated as the average of the contact status in prior intervals:
$p^{(0)}_{ij} = \frac{1}{n}\sum_{m=0}^{n-1} I_m$.   (5)
To adapt to changes in contact patterns, and to reduce the storage space for contact histories, a node may discard old history contacts; in this situation, the estimate would be based on only the retained history.
The above probability is the direct contact probability of two nodes. We are also interested in the probability that we may be able to pass a message through a sequence of k nodes. We define the k-order probability inductively,
$p^{(k)}_{ij} = p^{(0)}_{ij} + \sum_{\alpha} p^{(0)}_{i\alpha}\, p^{(k-1)}_{\alpha j}$,   (6)
where α is any node other than i or j.
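Equations (5) and (6) can be implemented directly over the contact-history structure sketched earlier (our code, with our own function names; the k-order recursion over a full probability matrix P0 follows the inductive definition):

```python
# Equation (5): slot the history into n windows of length dT and average the
# 0/1 contact indicators. Equation (6): k-order probability via recursion.
def timely_contact_prob(contacts, t0, now, dT):
    n = max(1, int((now - t0) // dT))
    hits = [0] * n
    for start, end in contacts:                       # mark windows with any contact
        lo = max(0, int((start - t0) // dT))
        hi = min(n - 1, int(((end if end is not None else now) - t0) // dT))
        for m in range(lo, hi + 1):
            hits[m] = 1
    return sum(hits) / n                              # p^(0)_ij

def k_order_prob(P0, i, j, k):
    if k == 0:
        return P0[i][j]
    total = P0[i][j]                                  # p^(0)_ij term
    for a in range(len(P0)):                          # alpha ranges over other nodes
        if a not in (i, j):
            total += P0[i][a] * k_order_prob(P0, a, j, k - 1)
    return total
```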
3.1 Our Routing Protocol
We first consider the case of a two-hop path, that is, with only one relay node. We consider two approaches: either the receiving neighbor decides whether to act as a relay, or the source decides which neighbors to use as relays.
3.1.1 Receiver Decision
Whenever a node meets other nodes, they exchange all their messages (or, as above, index messages). If the destination of a message is the receiver itself, the message is delivered. Otherwise, if the probability of delivering the message to its destination through this receiver node within ΔT is greater than or equal to a certain threshold, the message is stored in the receiver's storage to forward to the destination. If the probability is less than the threshold, the receiver discards the message. Notice that our protocol replicates the message whenever a good-looking relay comes along.
3.1.2 Sender Decision
To make decisions, a sender must have information about its neighbors' contact probabilities with a message's destination. Therefore, meta-data exchange is necessary. When two nodes meet, they exchange a meta-message containing an unordered list of node IDs for which the sender of the meta-message has a contact probability greater than the threshold. After receiving a meta-message, a node checks whether it has any message destined for its neighbor, or for a node in the node list of the neighbor's meta-message. If it has, it sends a copy of the message. When a node receives a message, if the destination of the message is the receiver itself, the message is delivered. Otherwise, the message is stored in the receiver's storage for forwarding to the destination.
3.1.3 Multi-node Relay
When we use more than two hops to relay a message, each node needs to know the contact probabilities along all possible paths to the message destination.
Every node keeps a contact probability matrix, in which each cell $p_{ij}$ is a contact probability between two nodes i and j. Each node i computes its own contact probabilities (row i) with other nodes using Equation (5) whenever the node ends a contact with other nodes. Each row of the contact probability matrix has a version number; the version number for row i is only increased when node i updates the matrix entries in row i. Other matrix entries are updated through exchange with other nodes when they meet.
When two nodes i and j meet, they first exchange their contact probability matrices. Node i compares its own contact matrix with node j's matrix. If node j's matrix has a row l with a higher version number, then node i replaces its own row l with node j's row l. Likewise, node j updates its matrix. After the exchange, the two nodes will have identical contact probability matrices.
Next, if a node has a message to forward, the node estimates its neighboring node's order-k contact probability to contact the destination of the message, using Equation (6). If $p^{(k)}_{ij}$ is above a threshold, or if j is the destination of the message, node i will send a copy of the message to node j.
All the above effort serves to determine the transfer probability when two nodes meet. The replication decision is orthogonal to the transfer decision. In our implementation, we always replicate. Although PRoPHET [16] and Link-State [22] do no replication, as described, we added replication to those protocols for better comparison to our protocol.
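The version-numbered row exchange of Section 3.1.3 amounts to a simple newest-wins merge; a sketch (ours) of that step:

```python
# On contact, each node adopts whichever copy of a row carries the higher
# version number, so both nodes finish with identical matrices.
def merge_matrices(P_a, ver_a, P_b, ver_b):
    for row in range(len(P_a)):
        if ver_b[row] > ver_a[row]:
            P_a[row], ver_a[row] = list(P_b[row]), ver_b[row]
        elif ver_a[row] > ver_b[row]:
            P_b[row], ver_b[row] = list(P_a[row]), ver_a[row]
```

Because only node i ever increments the version of row i, the higher version number always identifies the fresher copy, and the merge is safe to run in any order.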
4. EVALUATION RESULTS
We evaluate and compare the results of the direct-delivery, epidemic, random, PRoPHET, Link-State, and timely-contact routing protocols.
4.1 Mobility traces
We use real mobility data collected at Dartmouth College. Dartmouth College has collected association and disassociation messages from devices on its wireless network since spring 2001 [13]. Each message records the wireless card MAC address, the time of association/disassociation, and the name of the access point. We treat each unique MAC address as a node. For more information about Dartmouth's network and the data collection, see previous studies [7, 12].
Our data are not contacts in a mobile ad-hoc network. We can approximate contact traces by assuming that two users can communicate with each other whenever they are associated with the same access point. Chaintreau et al. [5] used Dartmouth data traces and made the same assumption to theoretically analyze the impact of human mobility on opportunistic forwarding algorithms. This assumption may not be accurate, but it is a good first approximation. [Footnote 2: Two nodes may not have been able to directly communicate while they were at two far sides of an access point, or two nodes may have been able to directly communicate if they were between two adjacent access points.] In our simulation, we imagine the same clients and the same mobility in a network with no access points. Since our campus has full Wi-Fi coverage, we assume that the location of access points had little impact on users' mobility.
We simulated one full month of trace data (November 2003) taken from CRAWDAD [13], with 5,142 users. Although prediction-based protocols require prior contact history to estimate each node's delivery probability, our preliminary results show that the performance improvement from warming up over one month of trace was marginal. Therefore, for simplicity, we show the results of all protocols without warming up.
4.2 Simulator
We developed a custom simulator. [Footnote 3: We tried to use a general network simulator (ns-2), which was extremely slow when simulating a large number of mobile nodes (in our case, more than 5000 nodes), and provided unnecessary detail in modeling lower-level network protocols.] Since we used contact traces derived from real mobility data, we did not need a mobility model and omitted physical and link-layer details for node discovery. We were aware that the time for neighbor discovery in different wireless technologies varies from less than one second to several seconds. Furthermore, connection establishment also takes time, such as DHCP. In our simulation, we assumed the nodes could discover and connect to each other instantly when they were associated with the same AP. To accurately model communication costs, however, we simulated some MAC-layer behaviors, such as collision.
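A minimal sketch (ours, not the simulator's code) of the backoff behavior described next, using the 1 ms slots and 30-slot maximum from Table 1:

```python
# Before sending, a node checks for a nearby transmission; if the channel is
# busy, it backs off a random number of 1 ms slots (at most 30) and retries.
import random

SLOT_MS, MAX_BACKOFF = 1, 30

def next_transmit_time(channel_busy, now_ms):
    if channel_busy(now_ms):
        return now_ms + SLOT_MS * random.randint(1, MAX_BACKOFF)  # retry later
    return now_ms                                                  # send now
```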
The default settings of our simulated network are listed in Table 1, using values recommended by other papers [22, 16]. The message probability was the probability of generating messages, as described in Section 4.3. The default transmission bandwidth was 11 Mb/s. When one node tried to transmit a message, it first checked whether any nearby node was transmitting. If so, the node backed off a random number of slots. Each slot was 1 millisecond, and the maximum number of backoff slots was 30. The size of messages was uniformly distributed between 80 bytes and 1024 bytes. The hop count limit (HCL) was the maximum number of hops before a message should stop being forwarded. The time to live (TTL) was the maximum duration that a message may exist before expiring. The storage capacity was the maximum space that a node can use for storing messages. For our routing method, we used a default prediction window ΔT of 10 hours and a probability threshold of 0.01. The replication factor r was not limited by default, so the source of a message transferred the message to any other node that had a contact probability with the message destination higher than the probability threshold.
Table 1: Default Settings of the Simulation
Parameter                 Default value
message probability       0.001
bandwidth                 11 Mb/s
transmission slot         1 millisecond
max backoff slots         30
message size              80-1024 bytes
hop count limit (HCL)     unlimited
time to live (TTL)        unlimited
storage capacity          unlimited
prediction window ΔT      10 hours
probability threshold     0.01
contact history length    20
replication               always
aging factor α            0.9 (0.98 for PRoPHET)
initial probability p0    0.75 (PRoPHET)
transitivity impact β     0.25 (PRoPHET)
4.3 Message generation
After each contact event in the contact trace, we generated a message with a given probability; we chose a source node and a destination node randomly, using a uniform distribution across nodes seen in the contact trace up to the current time. When there were more contacts during a certain period, there was a higher likelihood that a new message was generated in that period. This correlation is not unreasonable, since there were more movements during the day than during the night, and hence more contacts. Figure 1 shows the statistics of the numbers of movements and the numbers of contacts during each hour of the day, summed across all users and all days. The plot shows a clear diurnal activity pattern. The activities reached their lowest around 5am and peaked between 4pm and 5pm. We assume that in some applications network traffic exhibits similar patterns, that is, people send more messages during the day, too.
[Figure 1: Movements and contact durations during each hour of the day (number of occurrences vs. hour).]
Messages expire after a TTL. We did not use proactive methods to notify nodes of the delivery of messages, so that the messages could be removed from storage.
4.4 Metrics
We define a set of metrics that we use in evaluating routing protocols in opportunistic networks:
• delivery ratio, the ratio of the number of messages delivered to the number of total messages generated.
• message transmissions, the total number of messages transmitted during the simulation across all nodes.
• meta-data transmissions, the total number of meta-data units transmitted during the simulation across all nodes.
• message duplications, the number of times a message copy occurred, due to replication.
• delay, the duration between a message's generation time and the message's delivery time.
• storage usage, the max and mean of the maximum storage (bytes) used across all nodes.
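These metrics are straightforward to compute from a simulation log; the following sketch (ours; the Msg record is a minimal stand-in for the simulator's message bookkeeping) shows one way:

```python
# Compute the metrics above from per-message records and global counters.
from collections import namedtuple

Msg = namedtuple("Msg", "gen_time delivery_time")  # delivery_time None if undelivered

def compute_metrics(messages, transmissions, meta_transmissions, duplications):
    delivered = [m for m in messages if m.delivery_time is not None]
    delays = sorted(m.delivery_time - m.gen_time for m in delivered)
    return {
        "delivery ratio": len(delivered) / len(messages),
        "message transmissions": transmissions,
        "meta-data transmissions": meta_transmissions,
        "message duplications": duplications,
        # undelivered messages have infinite delay and are excluded here,
        # which matters when interpreting short-TTL delay numbers (Section 4.5)
        "median delay": delays[len(delays) // 2],
        "mean delay": sum(delays) / len(delays),
    }
```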
4.5 Results
Here we compare the simulation results of the six routing protocols.
Figure 2 shows the delivery ratio of all the protocols, with different TTLs. (In all the plots in the paper, prediction stands for our method, state stands for the Link-State protocol, and prophet represents PRoPHET.) Although we had 5,142 users in the network, the direct-delivery and random protocols had low delivery ratios (note the log scale). Even for messages with an unlimited lifetime, only 59 out of 2077 messages were delivered during this one-month simulation. The delivery ratio of epidemic routing was the best. The three prediction-based approaches had low delivery ratios compared to epidemic routing. Although our method was slightly better than the other two, the advantage was marginal.
[Figure 2: Delivery ratio vs. message time-to-live (log scale). The direct and random protocols for one-hour TTL had delivery ratios that were too low to be visible in the plot.]
The high delivery ratio of epidemic routing came with a price: excessive transmissions. Figure 3 shows the number of message data transmissions. The number of message transmissions of epidemic routing was more than 10 times higher than for the prediction-based routing protocols. Obviously, the direct-delivery protocol had the lowest number of message transmissions: the number of messages delivered. Among the three prediction-based methods, PRoPHET transmitted fewer messages, but had a comparable delivery ratio, as seen in Figure 2.
[Figure 3: Message transmissions vs. message time-to-live (log scale).]
Figure 4 shows that epidemic and all prediction-based methods had substantial meta-data transmissions, though epidemic routing had relatively more, with shorter TTLs. Because the epidemic protocol transmitted messages at every contact, in turn, more nodes had messages that required meta-data transmission during contact. The direct-delivery and random protocols had no meta-data transmissions.
[Figure 4: Meta-data transmissions vs. message time-to-live (log scale). The direct and random protocols had no meta-data transmissions.]
In addition to its message transmissions and meta-data transmissions, the epidemic routing protocol also had excessive message duplications, spreading replicas of messages over the network. Figure 5 shows that epidemic routing had one to two orders of magnitude more duplication than the prediction-based protocols. Recall that the direct-delivery and random protocols did not replicate, and thus had no data duplications.
Figure 6 shows both the median and mean delivery delays. All protocols show similar delivery delays, in both mean and median measures, for medium TTLs, but differ for long and short TTLs. With a 100-hour TTL, or unlimited TTL, epidemic routing had the shortest delays. The direct-delivery protocol had the longest delay for unlimited TTL, but it had the shortest delay for the one-hour TTL. The results seem contrary to our intuition: the epidemic routing protocol should be the fastest routing protocol, since it spreads messages all over the network. Indeed, the figures show only the delay time for delivered messages.
For the direct-delivery, random, and probability-based routing protocols, relatively few messages were delivered for short TTLs, so many messages expired before they could reach their destination; those messages had infinite delivery delay and were not included in the median or mean measurements. For longer TTLs, more messages were delivered, even for the direct-delivery protocol. The statistics for longer TTLs are thus more meaningful for comparison than those for short TTLs.
Since our message generation rate was low, the storage usage was also low in our simulation. Figure 7 shows the maximum and average of the maximum volume (in KBytes) of messages stored at each node. Epidemic routing had the most storage usage. The message time-to-live parameter was the big factor affecting the storage usage for the epidemic and prediction-based routing protocols.
[Figure 5: Message duplications vs. message time-to-live (log scale). The direct and random protocols had no message duplications.]
[Figure 6: Median and mean delays vs. message time-to-live (log scale).]
[Figure 7: Max and mean of maximum storage usage across all nodes vs. message time-to-live (log scale).]
We studied the impact of different parameters of our prediction-based routing protocol. Our prediction-based protocol was sensitive to several parameters, such as the probability threshold and the prediction window ΔT. Figure 8 shows the delivery ratios when we used different probability thresholds. (The leftmost value, 0.01, is the value used for the other plots.) A higher probability threshold limited the transfer probability, so fewer messages were delivered. It also required fewer transmissions, as shown in Figure 9. With a larger prediction window, we got a higher contact probability. Thus, for the same probability threshold, we had a slightly higher delivery ratio, as shown in Figure 10, and a few more transmissions, as shown in Figure 11.
[Figure 8: Probability threshold impact on the delivery ratio of timely-contact routing.]
5. RELATED WORK
In addition to the protocols that we evaluated in our simulation, several other opportunistic network routing protocols have been proposed in the literature. We did not implement and evaluate these routing protocols, because they either require domain-specific information (location information) [14, 15], assume certain mobility patterns [17], or present approaches [10, 24] orthogonal to other routing protocols.
LeBrun et al. [14] propose a location-based delay-tolerant network routing protocol. Their algorithm assumes that every node knows its own position, and that the destination is stationary at a known location. A node forwards data to a neighbor only if the neighbor is closer to the destination than its own position. Our protocol does not require knowledge of the nodes' locations, and learns their contact patterns.
Leguay et al. [15] use a high-dimensional space to represent a mobility pattern, then route messages to nodes that are closer to the destination node in the mobility pattern space. Location information of nodes is required to construct mobility patterns.
Musolesi et al. [17] propose an adaptive routing protocol for intermittently connected mobile ad-hoc networks. They use a Kalman filter to compute the probability that a node delivers messages. This protocol assumes group mobility and cloud connectivity, that is, nodes move as a group, and among this group of nodes a contemporaneous end-to-end connection exists for every pair of nodes. When two nodes are in the same connected cloud, DSDV [19] routing is used.
Network coding also draws much interest from DTN research. Erasure-coding [10, 24] explores coding algorithms to reduce message replicas. The source node replicates a message m times, then uses a coding scheme to encode the replicas in one big message. After the replicas are encoded, the source divides the big message into k blocks of the same size, and transmits a block to each of the first k encountered nodes. If m of the blocks are received at the destination, the message can be restored, where m < k. In a uniformly distributed mobility scenario, the delivery probability increases, because the probability that the destination node meets m relays is greater than the probability that it meets k relays, given m < k.
[Figure 9: Probability threshold impact on message transmissions of timely-contact routing.]
[Figure 10: Prediction window impact on the delivery ratio of timely-contact routing (semi-log scale).]
[Figure 11: Prediction window impact on message transmissions of timely-contact routing (semi-log scale).]
6. SUMMARY
We propose a prediction-based routing protocol for opportunistic networks. We evaluate the performance of our protocol using realistic contact traces, and compare it to five existing routing protocols.
Our simulation results show that direct delivery had the lowest delivery ratio, the fewest data transmissions, and no meta-data transmission or data duplication. Direct delivery is suitable for devices that require extremely low power consumption. The random protocol increased the chance of delivery for messages otherwise stuck at some low-mobility nodes. Epidemic routing delivered the most messages. The excessive transmissions and data duplication, however, consume more resources than portable devices may be able to provide.
None of these protocols (direct-delivery, random, and epidemic routing) are practical for real deployment of opportunistic networks, because they either had an extremely low delivery ratio or an extremely high resource consumption. The prediction-based routing protocols had a delivery ratio more than 10 times better than that of direct-delivery and random routing, and fewer transmissions and less storage usage than epidemic routing. They also had fewer data duplications than epidemic routing.
All the prediction-based routing protocols that we have evaluated had similar performance.
Our method had a slightly higher delivery ratio, but more transmissions and higher storage usage. There are many parameters for prediction-based routing protocols, however, and different parameters may produce different results. Indeed, there is an opportunity for some adaptation; for example, high-priority messages may be given higher transfer and replication probabilities to increase the chance of delivery and reduce the delay, or a node with infrequent contact may choose to raise its transfer probability.
We only studied the impact of predicting peer-to-peer contact probability for routing unicast messages. In some applications, context information (such as location) may be available for the peers. One may also consider other messaging models, for example, where messages are sent to a location, such that every node at that location will receive a copy of the message. Location prediction [21] may be used to predict nodes' mobility, and to choose as relays those nodes moving toward the destined location.
Research on routing in opportunistic networks is still in its early stage. Many other issues of opportunistic networks, such as security and privacy, are mainly left open. We anticipate studying these issues in future work.
7. ACKNOWLEDGEMENT
This research is a project of the Center for Mobile Computing and the Institute for Security Technology Studies at Dartmouth College. It was supported by DoCoMo Labs USA, the CRAWDAD archive at Dartmouth College (funded by NSF CRI Award 0454062), NSF Infrastructure Award EIA-9802068, and by Grant number 2005-DD-BX-1091 awarded by the Bureau of Justice Assistance. Points of view or opinions in this document are those of the authors and do not represent the official position or policies of any sponsor.
8. REFERENCES
[1] John Burgess, Brian Gallagher, David Jensen, and Brian Neil Levine. MaxProp: routing for vehicle-based disruption-tolerant networks. In Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM), April 2006.
[2] Scott Burleigh, Adrian Hooke, Leigh Torgerson, Kevin Fall, Vint Cerf, Bob Durst, Keith Scott, and Howard Weiss. Delay-tolerant networking: An approach to interplanetary Internet. IEEE Communications Magazine, 41(6):128-136, June 2003.
[3] Tracy Camp, Jeff Boleng, and Vanessa Davies. A survey of mobility models for ad-hoc network research. Wireless Communication & Mobile Computing (WCMC): Special issue on Mobile ad-hoc Networking: Research, Trends and Applications, 2(5):483-502, 2002.
[4] Andrew Campbell, Shane Eisenman, Nicholas Lane, Emiliano Miluzzo, and Ronald Peterson. People-centric urban sensing. In IEEE Wireless Internet Conference, August 2006.
[5] Augustin Chaintreau, Pan Hui, Jon Crowcroft, Christophe Diot, Richard Gass, and James Scott. Impact of human mobility on the design of opportunistic forwarding algorithms. In Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM), April 2006.
[6] Kevin Fall. A delay-tolerant network architecture for challenged internets. In Proceedings of the 2003 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (SIGCOMM), August 2003.
[7] Tristan Henderson, David Kotz, and Ilya Abyzov. The changing usage of a mature campus-wide wireless network. In Proceedings of the 10th Annual International Conference on Mobile Computing and Networking (MobiCom), pages 187-201, September 2004.
The changing usage of a mature campus-wide wireless network. In Proceedings of the 10th Annual International Conference on Mobile Computing and Networking (MobiCom), pages 187-201, September 2004.

[8] Pan Hui, Augustin Chaintreau, James Scott, Richard Gass, Jon Crowcroft, and Christophe Diot. Pocket switched networks and human mobility in conference environments. In ACM SIGCOMM Workshop on Delay Tolerant Networking, pages 244-251, August 2005.

[9] Ravi Jain, Dan Lelescu, and Mahadevan Balakrishnan. Model T: an empirical model for user registration patterns in a campus wireless LAN. In Proceedings of the 11th Annual International Conference on Mobile Computing and Networking (MobiCom), pages 170-184, 2005.

[10] Sushant Jain, Mike Demmer, Rabin Patra, and Kevin Fall. Using redundancy to cope with failures in a delay tolerant network. In Proceedings of the 2005 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (SIGCOMM), pages 109-120, August 2005.

[11] Philo Juang, Hidekazu Oki, Yong Wang, Margaret Martonosi, Li-Shiuan Peh, and Daniel Rubenstein. Energy-efficient computing for wildlife tracking: Design tradeoffs and early experiences with ZebraNet. In the Tenth International Conference on Architectural Support for Programming Languages and Operating Systems, October 2002.

[12] David Kotz and Kobby Essien. Analysis of a campus-wide wireless network. Wireless Networks, 11:115-133, 2005.

[13] David Kotz, Tristan Henderson, and Ilya Abyzov. CRAWDAD data set dartmouth/campus. http://crawdad.cs.dartmouth.edu/dartmouth/campus, December 2004.

[14] Jason LeBrun, Chen-Nee Chuah, Dipak Ghosal, and Michael Zhang. Knowledge-based opportunistic forwarding in vehicular wireless ad-hoc networks. In IEEE Vehicular Technology Conference, pages 2289-2293, May 2005.

[15] Jeremie Leguay, Timur Friedman, and Vania Conan. Evaluating mobility pattern space routing for DTNs. In Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM), April 2006.

[16] Anders Lindgren, Avri Doria, and Olov Schelen. Probabilistic routing in intermittently connected networks. In Workshop on Service Assurance with Partial and Intermittent Resources (SAPIR), pages 239-254, 2004.

[17] Mirco Musolesi, Stephen Hailes, and Cecilia Mascolo. Adaptive routing for intermittently connected mobile ad-hoc networks. In IEEE International Symposium on a World of Wireless Mobile and Multimedia Networks, pages 183-189, June 2005. Extended version.

[18] OLPC. One laptop per child project. http://laptop.org.

[19] C. E. Perkins and P. Bhagwat. Highly dynamic destination-sequenced distance-vector routing (DSDV) for mobile computers. Computer Communication Review, pages 234-244, October 1994.

[20] C. E. Perkins and E. M. Royer. Ad-hoc on-demand distance vector routing. In IEEE Workshop on Mobile Computing Systems and Applications, pages 90-100, February 1999.

[21] Libo Song, David Kotz, Ravi Jain, and Xiaoning He. Evaluating next-cell predictors with extensive Wi-Fi mobility data. IEEE Transactions on Mobile Computing, 5(12):1633-1649, December 2006.

[22] Jing Su, Ashvin Goel, and Eyal de Lara. An empirical evaluation of the student-net delay tolerant network. In International Conference on Mobile and Ubiquitous Systems (MobiQuitous), July 2006.

[23] Amin Vahdat and David Becker. Epidemic routing for partially-connected ad-hoc networks.
Technical Report CS-2000-06, Duke University, July 2000.

[24] Yong Wang, Sushant Jain, Margaret Martonosi, and Kevin Fall. Erasure-coding based routing for opportunistic networks. In ACM SIGCOMM Workshop on Delay Tolerant Networking, pages 229-236, August 2005.

[25] Yu Wang and Hongyi Wu. DFT-MSN: the delay fault tolerant mobile sensor network for pervasive information gathering. In Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM), April 2006.", "keywords": "contact trace;opportunistic network;route;epidemic protocol;frequent link break;end-to-end path;prophet;transfer probability;replication strategy;mobile opportunistic network;delay-tolerant network;random mobility model;direct-delivery protocol;simulation;realistic mobility trace;past encounter and transitivity history;history of past encounter and transitivity;unicast;routing protocol"} {"name": "train_C-50", "title": "CenWits: A Sensor-Based Loosely Coupled Search and Rescue System Using Witnesses", "abstract": "This paper describes the design, implementation and evaluation of a search and rescue system called CenWits. CenWits uses several small, commonly-available RF-based sensors, and a small number of storage and processing devices. It is designed for search and rescue of people in emergency situations in wilderness areas. A key feature of CenWits is that it does not require a continuously connected sensor network for its operation. It is designed for an intermittently connected network that provides only occasional connectivity. It makes a judicious use of the combined storage capability of sensors to filter, organize and store important information, combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate information to a processing center. A prototype of CenWits has been implemented using Berkeley Mica2 motes. The paper describes this implementation and reports on the performance measured from it.", "fulltext": "1. INTRODUCTION

Search and rescue of people in emergency situations in a timely manner is an extremely important service. It has been difficult to provide such a service due to the lack of timely information needed to determine the current location of a person who may be in an emergency situation. With the emergence of pervasive computing, several systems [12, 19, 1, 5, 6, 4, 11] have been developed over the last few years that make use of small devices such as cell phones, sensors, etc. All these systems require a connected network via satellites, GSM base stations, or mobile devices. This requirement severely limits their applicability, particularly in remote wilderness areas where maintaining a connected network is very difficult.

For example, a GSM transmitter has to be in the range of a base station to transmit. As a result, it cannot operate in most wilderness areas. While a satellite transmitter is the only viable solution in wilderness areas, it is typically expensive and cumbersome. Furthermore, a line of sight is required to transmit to a satellite, and that makes it infeasible to stay connected in narrow canyons, large cities with skyscrapers, rain forests, or even when there is a roof or some other obstruction above the transmitter, e.g. in a car. An RF transmitter has a relatively small transmission range.
So, while an in-situ sensor is cheap as a single unit, it is expensive to build a large network that can provide connectivity over a large wilderness area. In a mobile environment where sensors are carried by moving people, power-efficient routing is difficult to implement and maintain over a large wilderness area. In fact, building an ad hoc sensor network using only the sensors worn by hikers is nearly impossible due to a relatively small number of sensors spread over a large wilderness area.

In this paper, we describe the design, implementation and evaluation of a search and rescue system called CenWits (Connection-less Sensor-Based Tracking System Using Witnesses). CenWits is comprised of mobile, in-situ sensors that are worn by subjects (people, wild animals, or inanimate objects), access points (AP) that collect information from these sensors, and GPS receivers and location points (LP) that provide location information to the sensors. A subject uses GPS receivers (when it can connect to a satellite) and LPs to determine its current location. The key idea of CenWits is that it uses a concept of witnesses to convey a subject's movement and location information to the outside world. This averts the need for maintaining a connected network to transmit location information to the outside world. In particular, there is no need for expensive GSM or satellite transmitters, or for maintaining an ad hoc network of in-situ sensors in CenWits.

CenWits employs several important mechanisms to address the key problem of resource constraints (low signal strength, low power and limited memory) in sensors. In particular, it makes a judicious use of the combined storage capability of sensors to filter, organize and store important information, the combined battery power of sensors to ensure that the system remains operational for longer time periods, and intermittent network connectivity to propagate information to a processing center.

The problem of low signal strengths (short range RF communication) is addressed by avoiding the need for maintaining a connected network. Instead, CenWits propagates the location information of sensors using the concept of witnesses through an intermittently connected network. As a result, this system can be deployed in remote wilderness areas, as well as in large urban areas with skyscrapers and other tall structures. Also, this makes CenWits cost-effective. A subject only needs to wear light-weight and low-cost sensors that have GPS receivers but no expensive GSM or satellite transmitters. Furthermore, since there is no need for a connected sensor network, there is no need to deploy sensors in very large numbers.

The problem of limited battery life and limited memory of a sensor is addressed by incorporating the concepts of groups and partitions. Groups and partitions allow sensors to stay in sleep or receive modes most of the time. Using groups and partitions, the location information collected by a sensor can be distributed among several sensors, thereby reducing the amount of memory needed in one sensor to store that information. In fact, CenWits provides an adaptive tradeoff between memory and power consumption of sensors.
Each sensor can dynamically adjust its power and memory consumption based on its remaining power or available memory.

It has amply been noted that the strength of sensor networks comes from the fact that several sensor nodes can be distributed over a relatively large area to construct a multihop network. This paper demonstrates that important large-scale applications can be built using sensors by judiciously integrating the storage, communication and computation capabilities of sensors. The paper describes important techniques to combine the memory, transmission and battery power of many sensors to address resource constraints in the context of a search and rescue application. However, these techniques are quite general. We discuss several other sensor-based applications that can employ these techniques.

While CenWits addresses the general location tracking and reporting problem in a wide-area network, there are two important differences from the earlier work done in this area. First, unlike earlier location tracking solutions, CenWits does not require a connected network. Second, unlike earlier location tracking solutions, CenWits does not aim for a very high accuracy of localization. Instead, the main goal is to provide an approximate, small area where search and rescue efforts can be concentrated.

The rest of this paper is organized as follows. In Section 2, we overview some of the recent projects and technologies related to movement and location tracking, and search and rescue systems. In Section 3, we describe the overall architecture of CenWits, and provide a high-level description of its functionality. In the next section, Section 4, we discuss power and memory management in CenWits. To simplify our presentation, we focus on a specific application of tracking lost/injured hikers in all these sections. In Section 6, we describe a prototype implementation of CenWits and present performance measured from this implementation. We discuss how the ideas of CenWits can be used to build several other applications in Section 7. Finally, in Section 8, we discuss some related issues and conclude the paper.

2. RELATED WORK

A survey of location systems for ubiquitous computing is provided in [11]. A location tracking system for ad hoc sensor networks that uses anchor sensors as references to gain location information and spread it out to outer nodes is proposed in [17]. Most location tracking systems in ad hoc sensor networks are designed to benefit geographic-aware routing. They don't fit well for our purposes. The well-known active badge system [19] lets a user carry a badge around. An infrared sensor in the room can detect the presence of a badge and determine the location and identification of the person. This is a useful system for indoor environments, where GPS doesn't work. Localization using 802.11 devices is probably the cheapest solution for indoor position tracking [8]. Because of the popularity and low cost of 802.11 devices, several business solutions based on this technology have been developed [1].

A system that combines two mature technologies and is viable in suburban areas where a user can see clear sky and has GSM cellular reception at the same time is currently available [5].
This system receives GPS signals from a satellite and locates itself, draws its location on a map, and sends location information through the GSM network to others who are interested in the user's location.

A very simple system to monitor children consists of an RF transmitter and a receiver. The system alarms the holder of the receiver when the transmitter is about to run out of range [6].

Personal Locator Beacons (PLBs) have been used for avalanche rescue for years. A skier carries an RF transmitter that emits beacons periodically, so that a rescue team can find his/her location based on the strength of the RF signal. A luxury version of the PLB combines a GPS receiver and a COSPAS-SARSAT satellite transmitter that can transmit the user's location in latitude and longitude to the rescue team whenever an accident happens [4]. However, the device either is turned on all the time, resulting in fast battery drain, or must be turned on after the accident to function.

Another related technology in widespread use today is the ONSTAR system [3], typically used in several luxury cars. In this system, a GPS unit provides position information, and a powerful transmitter relays that information via satellite to a customer service center. Designed for emergencies, the system can be triggered either by the user with the push of a button, or by a catastrophic accident. Once the system has been triggered, a human representative attempts to gain communication with the user via a cell phone built in as an in-car device. If contact cannot be made, emergency services are dispatched to the location provided by GPS. Like PLBs, this system has several limitations. First, it is heavy and expensive. It requires a satellite transmitter and a connected network. If connectivity with either the GPS network or a communication satellite cannot be maintained, the system fails. Unfortunately, these are common obstacles encountered in deep canyons, narrow streets in large cities, parking garages, and a number of other places.

The Lifetch system uses a GPS receiver board combined with a GSM/GPRS transmitter and an RF transmitter in one wireless sensor node called an Intelligent Communication Unit (ICU). An ICU first attempts to transmit its location to a control center through the GSM/GPRS network. If that fails, it connects with other ICUs (an ad hoc network) to forward its location information until the information reaches an ICU that has GSM/GPRS reception. This ICU then transmits the location information of the original ICU via the GSM/GPRS network.

ZebraNet is a system designed to study the moving patterns of zebras [13]. It utilizes two protocols: a history-based protocol and a flooding protocol. The history-based protocol is used when the zebras are grazing and not moving around too much. While this might be useful for tracking zebras, it is not suitable for tracking hikers, because two hikers are most likely to meet each other only once on a trail. In the flooding protocol, a node dumps its data to a neighbor whenever it finds one and doesn't delete its own copy until it finds a base station. Without considering routing loops, packet filtering and grouping, the size of the data on a node will grow exponentially and drain the power and memory of a sensor node within a short time. Instead, CenWits uses a four-phase handshake protocol to ensure that a node transmits only as much information as the other node is willing to receive.
While ZebraNet is designed for a big group of sensors moving together in the same direction at the same speed, CenWits is designed to be used in scenarios where sensors move in different directions at different speeds.

The delay tolerant network architecture addresses some important problems in challenged (resource-constrained) networks [9]. While this work is mainly concerned with interoperability of challenged networks, some problems related to occasionally-connected networks are similar to the ones we have addressed in CenWits.

Among all these systems, the luxury PLB and Lifetch are designed for location tracking in wilderness areas. However, both of these systems require a connected network. The luxury PLB requires the user to transmit a signal to a satellite, while Lifetch requires a connection to the GSM/GPRS network. The luxury PLB transmits location information only when an accident happens. However, if the user is buried in the snow or falls into a deep canyon, there is almost no chance for the signal to go through and be relayed to the rescue team. This is because satellite transmission needs line of sight. Furthermore, since there is no known history of the user's location, it is not possible for the rescue team to infer the current location of the user. Another disadvantage of the luxury PLB is that a satellite transmitter is very expensive, costing in the range of $750. Lifetch attempts to transmit the location information by GSM/GPRS and an ad hoc sensor network that uses AODV as the routing protocol. However, having cellular reception in remote wilderness areas, e.g. American national parks, is unlikely. Furthermore, it is extremely unlikely that ICUs worn by hikers will be able to form an ad hoc network in a large wilderness area. This is because the hikers are mobile and it is very unlikely to have several ICUs placed densely enough to forward packets even on a very popular hike route.

CenWits is designed to address the limitations of systems such as the luxury PLB and Lifetch. It is designed to provide hikers, skiers, and climbers who have their activities mainly in wilderness areas a much higher chance of conveying their location information to a control center. It is not reliant upon constant connectivity with any communication medium. Rather, it communicates information along from user to user, finally arriving at a control center. Unlike several of the systems discussed so far, it does not require that a user's unit is constantly turned on. In fact, it can discover a victim's location, even if the victim's sensor was off at the time of the accident and has remained off since then. CenWits solves one of the greatest problems plaguing modern search and rescue systems: it has an inherent on-site storage capability. This means someone within the network will have access to the last-known-location information of a victim, and perhaps his bearing and speed information as well.

Figure 1: Hiker A and Hiker B are not in the range of each other

3. CENWITS

We describe CenWits in the context of locating lost/injured hikers in wilderness areas. Each hiker wears a sensor (a MICA2 mote in our prototype) equipped with a GPS receiver and an RF transmitter. Each sensor is assigned a unique ID and maintains its current location based on the signal received by its GPS receiver. It also emits beacons periodically.
When any two sensors are in the range of one another, they record the presence of each other (witness information), and also exchange the witness information they recorded earlier. The key idea here is that if two sensors come within range of each other at any time, they become each other's witnesses. Later on, if the hiker wearing one of these sensors is lost, the other sensor can convey the last known (witnessed) location of the lost hiker. Furthermore, by exchanging the witness information that each sensor recorded earlier, the witness information is propagated beyond a direct contact between two sensors.

To convey witness information to a processing center or to a rescue team, access points are established at well-known locations that the hikers are expected to pass through, e.g. at the trail heads, trail ends, intersections of different trails, scenic view points, resting areas, and so on. Whenever a sensor node is in the vicinity of an access point, all witness information stored in that sensor is automatically dumped to the access point. Access points are connected to a processing center via satellite or some other network. (A connection is needed only between access points and a processing center. There is no need for any connection between different access points.) The witness information is downloaded to the processing center from various access points at regular intervals. In case the connection to an access point is lost, the information from that access point can be downloaded manually, e.g. by UAVs. To estimate the speed, location and direction of a hiker at any point in time, all witness information of that hiker that has been collected from various access points is processed.

Figure 2: Hiker A and Hiker B are in the range of each other. A records the presence of B and B records the presence of A. A and B become each other's witnesses.

Figure 3: Hiker A is in the range of an access point. It uploads its recorded witness information and clears its memory.

An example of how CenWits operates is illustrated in Figures 1, 2 and 3. First, hikers A and B are on two close trails, but out of range of each other (Figure 1). This is a very common scenario during a hike. For example, on a popular four-hour hike, a hiker might run into as many as 20 other hikers. This accounts for one encounter every 12 minutes on average. A slow hiker can go 1 mile (5,280 feet) per hour. Thus in 12 minutes a slow hiker can go as far as 1,056 feet. This implies that if we were to put 20 hikers on a 4-hour, one-way hike evenly, the range of each sensor node should be at least 1,056 feet for them to communicate with one another continuously. The signal strength starts dropping rapidly for two Mica2 nodes communicating with each other when they are 180 feet apart, and is completely lost when they are 230 feet away from each other [7]. So, for the sensors to form a sensor network on a 4-hour hiking trail, there should be at least 120 hikers scattered evenly. Clearly, this is extremely unlikely. In fact, on a 4-hour, less-popular hiking trail, one might only run into, say, five other hikers.

CenWits takes advantage of the fact that sensors can communicate with one another and record their presence.
Given a walking speed of one mile per hour (88 feet per minute) and a Mica2 range of about 150 feet for non-line-of-sight radio transmission, two hikers have about 150/88 = 1.7 minutes to discover the presence of each other and exchange their witness information. We therefore design our system to have each sensor emit a beacon every one and a half minutes. In Figure 2, hiker B's sensor emits a beacon when A is in range; this triggers A to exchange data with B. A communicates the following information to B: My ID is A; I saw C at 1:23 PM at (39°49.3277655", 105°39.1126776"); I saw E at 3:09 PM at (40°49.2234879", 105°20.3290168"). B then replies with: My ID is B; I saw K at 11:20 AM at (39°51.4531655", 105°41.6776223"). In addition, A records I saw B at 4:17 PM at (41°29.3177354", 105°04.9106211") and B records I saw A at 4:17 PM at (41°29.3177354", 105°04.9106211").

B goes on his way to overnight camping while A heads back to the trail head where there is an AP, which emits a beacon every 5 seconds to avoid missing any hiker. A dumps all witness information it has collected to the access point. This is shown in Figure 3.

3.1 Witness Information: Storage

A critical concern is that there is a limited amount of memory available on motes (4 KB SDRAM memory, 128 KB flash memory, and 4-512 KB EEPROM). So, it is important to organize witness information efficiently. CenWits stores witness information at each node as a set of witness records; the format is shown in Figure 4.

Figure 4: Format of a witness record: Node ID (1 B), Record Time (3 B), X,Y Location (8 B), Location Time (3 B), Hop Count (1 B).

When two nodes i and j encounter each other, each node generates a new witness record. In the witness record generated by i, Node ID is j, Record Time is the current time in i's clock, (X,Y) are the coordinates of the location of i that i recorded most recently (either from a satellite or an LP), Location Time is the time when this location was recorded, and Hop Count is 0.

Each node is assigned a unique Node Id when it enters a trail. In our current prototype, we have allocated one byte for Node Id, although this can be increased to two or more bytes if a large number of hikers are expected to be present at the same time. We can represent time in 17 bits to a precision of one second. So, we have allocated 3 bytes each for Record Time and Location Time. The circumference of the Earth is approximately 40,075 KM. If we use a 32-bit number to represent both longitude and latitude, the precision we get is 40,075,000/2^32 = 0.0093 meter = 0.37 inches, which is quite precise for our needs. So, we have allocated 4 bytes each for the X and Y coordinates of the location of a node. In fact, a foot of precision can be achieved by using only 27 bits.
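To make this layout concrete, here is a minimal C sketch of the 16-byte record and the fixed-point coordinate encoding described above; the type and field names are our own illustrative choices, not identifiers from the CenWits source.

```c
#include <stdint.h>

/* A minimal sketch of the 16-byte witness record from Figure 4.
   Field names are hypothetical; the actual CenWits source may differ. */
typedef struct __attribute__((packed)) {
    uint8_t  node_id;           /* 1 B: ID of the witnessed node        */
    uint8_t  record_time[3];    /* 3 B: encounter time, 1-second units  */
    uint32_t x;                 /* 4 B: longitude, fixed-point fraction */
    uint32_t y;                 /* 4 B: latitude, fixed-point fraction  */
    uint8_t  location_time[3];  /* 3 B: when (x,y) was last updated     */
    uint8_t  hop_count;         /* 1 B: times this record was forwarded */
} witness_record_t;             /* total: 16 bytes                      */

/* Map degrees (0..360) to the 32-bit fixed-point encoding.
   Resolution: 40,075,000 m / 2^32 = 0.0093 m, as computed in the text. */
static uint32_t degrees_to_fixed(double deg) {
    return (uint32_t)(deg / 360.0 * 4294967296.0);
}
```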
3.2 Location Point and Location Inference

Although a GPS receiver provides accurate location information, it has its limitations. In canyons and rain forests, a GPS receiver does not work. When there is a heavy cloud cover, GPS users have experienced inaccuracy in the reported location as well. Unfortunately, a lot of hiking trails are in dense forests and canyons, and it is not uncommon for rain to start after hikers begin hiking. To address this, CenWits incorporates the idea of location points (LP). A location point can update a sensor node with its current location whenever the node is near that LP. LPs are placed at different locations in a wilderness area where GPS receivers don't work. An LP is a very simple device that emits prerecorded location information at some regular time interval. It can be placed in difficult-to-reach places such as deep canyons and dense rain forests by simply dropping it from an airplane. LPs allow a sensor node to determine its current location more accurately. However, they are not an essential requirement of CenWits. If an LP runs out of power, CenWits will continue to work correctly.

Figure 5: GPS receivers not working correctly. Sensors then have to rely on an LP to provide coordinates.

In Figure 5, B cannot get GPS reception due to bad weather. It then runs into A on the trail, who doesn't have GPS reception either. Their sensors record the presence of each other. After 10 minutes, A is in range of an LP that provides accurate location information to A. When A returns to the trail head and uploads its data (Figure 6), the system can draw a circle centered at the LP from which A fetched location information for the range of the encounter location of A and B. By overlapping this circle with the trail map, two or three possible locations of the encounter can be inferred. Thus when a rescue is required, the possible location of B can be better inferred (see Figures 7 and 8).

Figure 6: A is back at the trail head. It reports the time of its encounter with B to the AP, but no location information.

Figure 7: B is still missing after sunset. CenWits infers the last contact point and draws the circle of possible current locations based on average hiking speed.

Figure 8: Based on the overlapping landscape, B might have hiked to the wrong branch and fallen off a cliff. Hot rescue areas can thus be determined.
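The circle of possible current locations in Figure 7 follows directly from the last witnessed contact: the subject can be at most (elapsed time × assumed hiking speed) away from the last known point. A small sketch of this estimate, with hypothetical names and an assumed average speed expressed in feet per minute:

```c
#include <math.h>

/* Radius (feet) of the possible-location circle around the last
   witnessed position, given elapsed time and an assumed speed.
   Names and the ft/min convention are illustrative assumptions. */
double search_radius_ft(double last_contact_s, double now_s,
                        double avg_speed_ft_per_min) {
    double elapsed_min = (now_s - last_contact_s) / 60.0;
    return elapsed_min * avg_speed_ft_per_min;
}

/* A point (px,py) is a candidate only if it lies inside the circle
   centered at the last known location (cx,cy). */
int in_search_area(double px, double py, double cx, double cy,
                   double radius_ft) {
    return hypot(px - cx, py - cy) <= radius_ft;
}
```

Intersecting this circle with the trail map, as in Figure 8, then narrows the hot rescue areas.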
CenWits requires that the clocks of different sensor nodes be loosely synchronized with one another. Such synchronization is trivial when GPS coverage is available. In addition, sensor nodes in CenWits synchronize their clocks whenever they are in the range of an AP or an LP. The synchronization accuracy CenWits needs is of the order of a second or so. As long as the clocks are synchronized to within a one-second range, whether A met B at 12:37:45 or at 12:37:46 doesn't matter in the ordering of witness events and inferring the path.

4. MEMORY AND POWER MANAGEMENT

CenWits employs several important mechanisms to conserve power and memory. It is important to note that while current sensor nodes have a limited amount of memory, future sensor nodes are expected to have much more memory. With this in mind, the main focus in our design is to provide a tradeoff between the amount of memory available and the amount of power consumed.

4.1 Memory Management

The size of the witness information stored at a node can get very large. This is because the node may come across several other nodes during a hike, and may end up accumulating a large amount of witness information over time. To address this problem, CenWits allows a node to pro-actively free up parts of its memory periodically. This raises an interesting question: when, and which, witness records should be deleted from the memory of a node? CenWits uses three criteria to determine this: record count, hop count, and record gap.

Record count refers to the number of witness records with the same node id that a node has stored in its memory. A node maintains an integer parameter MAX_RECORD_COUNT. It stores at most MAX_RECORD_COUNT witness records of any node.

Every witness record has a hop count field that stores the number of times (hops) this record has been transferred since being created. Initially this field is set to 0. Whenever a node receives a witness record from another node, it increments the hop count of that record by 1. A node maintains an integer parameter called MAX_HOP_COUNT, and keeps only those witness records in its memory whose hop count is less than MAX_HOP_COUNT. The MAX_HOP_COUNT parameter provides a balance between two conflicting goals: (1) to ensure that a witness record has been propagated to, and thus stored at, as many nodes as possible, so that it has a high probability of being dumped at some AP as quickly as possible; and (2) to ensure that a witness record is stored at only a few nodes, so that it does not clog up too much of the combined memory of all sensor nodes. We chose to use hop count instead of time-to-live to decide when to drop a record. The main reason for this is that the probability of a record reaching an AP goes up as the hop count adds up. For example, when the hop count is 5 for a specific record, the record is present in at least 5 sensor nodes. On the other hand, if we discard old records without considering hop count, there is no guarantee that the record is present in any other sensor node.

Record gap refers to the time difference between the record times of two witness records with the same node id. To save memory, a node n ensures that the record gap between any two witness records with the same node id is at least MIN_RECORD_GAP. For each node id i, n stores the witness record with the most recent record time rt_i, the witness record with the most recent record time that is at least MIN_RECORD_GAP time units before rt_i, and so on, until the record count limit (MAX_RECORD_COUNT) is reached.

When a node is tight on memory, it adjusts the three parameters, MAX_RECORD_COUNT, MAX_HOP_COUNT and MIN_RECORD_GAP, to free up some memory. It decrements MAX_RECORD_COUNT and MAX_HOP_COUNT, and increments MIN_RECORD_GAP. It then first erases all witness records whose hop count exceeds the reduced MAX_HOP_COUNT value, and then erases witness records to satisfy the record gap criterion. Also, when a node has extra memory space available, e.g. after dumping its witness information at an access point, it resets MAX_RECORD_COUNT, MAX_HOP_COUNT and MIN_RECORD_GAP to some predefined values.
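A sketch of how the three criteria might be applied when scanning a node's store is given below. It assumes records are scanned per node id in order of decreasing record time, and reuses the witness_record_t type from the earlier sketch; the helper names and the 3-byte time unpacking are our own assumptions, not code from the prototype.

```c
#include <stdbool.h>
#include <stdint.h>

/* Unpack the 3-byte, one-second-resolution time field. */
static uint32_t time24(const uint8_t t[3]) {
    return ((uint32_t)t[0] << 16) | ((uint32_t)t[1] << 8) | t[2];
}

/* Decide whether to keep a record, given the most recent record time
   already kept for this node id and how many records have been kept. */
bool keep_record(const witness_record_t *r,
                 uint32_t newer_time_same_id,
                 int kept_for_id,
                 int max_record_count, int max_hop_count,
                 uint32_t min_record_gap) {
    if (r->hop_count >= max_hop_count) return false;     /* hop count    */
    if (kept_for_id >= max_record_count) return false;   /* record count */
    if (kept_for_id > 0 &&
        newer_time_same_id - time24(r->record_time) < min_record_gap)
        return false;                                    /* record gap   */
    return true;
}
```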
4.2 Power Management

An important advantage of using sensors for tracking purposes is that we can regulate the behavior of a sensor node based on current conditions. For example, we mentioned earlier that a sensor should emit a beacon every 1.7 minutes, given a hiking speed of 1 mile/hour. However, if a user is moving at 10 feet/sec, a beacon should be emitted every 10 seconds. If a user is not moving at all, a beacon can be emitted every 10 minutes. At night, a sensor can be put into sleep mode to save energy, when a user is not likely to move at all for a relatively long period of time. If a user is active for only eight hours in a day, we can put the sensor into sleep mode for the other 16 hours and thus save two-thirds of the energy.

In addition, a sensor node can choose not to send any beacons during some time intervals. For example, suppose hiker A has communicated its witness information to three other hikers in the last five minutes. If it is running low on power, it can go to receive mode or sleep mode for the next ten minutes. It goes to receive mode if it is still willing to receive additional witness information from hikers that it encounters in the next ten minutes. It goes to sleep mode if it is extremely low on power.

The bandwidth and energy limitations of sensor nodes require that the amount of data transferred among the nodes be reduced to a minimum. It has been observed that in some scenarios 3,000 instructions could be executed for the same energy cost as sending a bit 100 m by radio [15]. To reduce the amount of data transfer, CenWits employs a handshake protocol that two nodes execute when they encounter one another. The goal of this protocol is to ensure that a node transmits only as much witness information as the other node is willing to receive. This protocol is initiated when a node i receives a beacon containing the node ID of the sender node j, and i has not exchanged witness information with j in the last δ time units. Assume that i < j. The protocol consists of four phases (see Figure 9):

1. Phase I: Node i sends its receive constraints and the number of witness records it has in its memory.

2. Phase II: On receiving this message from i, j sends its receive constraints and the number of witness records it has in its memory.

3. Phase III: On receiving the above message from j, i sends its witness information (filtered based on the receive constraints received in Phase II).

4. Phase IV: After receiving the witness records from i, j sends its witness information (filtered based on the receive constraints received in Phase I).

Figure 9: Four-Phase Handshake Protocol (i < j)

Receive constraints are a function of memory and power. In the most general case, they are comprised of the three parameters (record count, hop count and record gap) used for memory management. If i is low on memory, it specifies the maximum number of records it is willing to accept from j. Similarly, i can ask j to send only those records that have a hop count value less than MAX_HOP_COUNT - 1. Finally, i can include its MIN_RECORD_GAP value in its receive constraints. Note that the handshake protocol is beneficial to both i and j. They save memory by receiving only as much information as they are willing to accept, and conserve energy by sending only as many witness records as needed.

It turns out that filtering witness records based on MIN_RECORD_GAP is complex. It requires that the witness records of any given node be arranged in an order sorted by their record time values. Maintaining this sorted order is complex in memory, because new witness records with the same node id can arrive later and may have to be inserted in between to preserve the sorted order. For this reason, the receive constraints in the current CenWits prototype do not include record gap.

Suppose i specifies a hop count value of 3. In this case, j checks the hop count field of every witness record before sending it. If the hop count value is greater than 3, the record is not transmitted.
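The filtering in Phases III and IV can be expressed as a simple selection over the local store. The sketch below reuses the witness_record_t type from the earlier sketch; the constraint structure and field names are our assumptions, and record gap is omitted, as in the prototype.

```c
#include <stddef.h>
#include <stdint.h>

/* Receive constraints advertised in Phases I and II (illustrative). */
typedef struct {
    uint16_t max_records;  /* most records the peer is willing to accept */
    uint8_t  max_hop;      /* drop records whose hop count exceeds this  */
} constraints_t;

/* Phase III/IV: select only records the peer is willing to receive. */
size_t select_records_to_send(const witness_record_t *store, size_t n,
                              const constraints_t *peer,
                              witness_record_t *out) {
    size_t sent = 0;
    for (size_t i = 0; i < n && sent < peer->max_records; i++) {
        if (store[i].hop_count <= peer->max_hop)  /* e.g. max_hop = 3 */
            out[sent++] = store[i];
    }
    return sent;
}
```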
4.3 Groups and Partitions

To further reduce communication and increase the lifetime of our system, we introduce the notion of groups. The idea is based on the concept of abstract regions presented in [20]. A group is a set of n nodes that can be defined in terms of radio connectivity, geographic location, or other properties of nodes. All nodes within a group can communicate directly with one another, and they share information to maintain their view of the external world. At any point in time, a group has exactly one leader that communicates with external nodes on behalf of the entire group. A group can be static, meaning that the group membership does not change over time, or it can be dynamic, in which case nodes can leave or join the group. To make our analysis simple and to explain the advantages of groups, we first discuss static groups.

A static group is formed at the start of a hiking trail or ski slope. Suppose there are five family members who want to go for a hike in the Rocky Mountain National Park. Before these members start their hike, each one of them is given a sensor node, and the information that the five nodes form a group is entered in the system. Each group member is given a unique id, and every group member knows about the other members of the group. The group, as a whole, is also assigned an id to distinguish it from other groups in the system.

Figure 10: A group of five people. Node 2 is the group leader and it is communicating on behalf of the group with an external node 17. All other nodes (shown in a lighter shade) are in sleep mode.

As the group moves through the trail, it exchanges information with other nodes or groups that it comes across. At any point in time, only one group member, called the leader, sends and receives information on behalf of the group, and all other n - 1 group members are put in sleep mode (see Figure 10). It is this property of groups that saves us energy. The group leadership is time-multiplexed among the group members. This is done to make sure that a single node does not run out of battery due to continuous exchange of information. Thus after every t seconds, the leadership is passed on to another node, called the successor, and the leader (now an ordinary member) is put to sleep. Since energy is dear, we do not implement an extensive election algorithm for choosing the successor. Instead, we choose the successor on the basis of node id: the node with the next highest id in the group is chosen as the successor. The last node, of course, chooses the node with the lowest id as its successor.
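Successor selection is then just a scan for the next-highest id, wrapping around to the lowest; a minimal sketch with illustrative names:

```c
#include <stdint.h>

/* Leadership rotation by node id: the next-highest id in the group
   becomes leader; the highest id wraps around to the lowest. */
uint8_t next_leader(const uint8_t ids[], int n, uint8_t current) {
    uint8_t next = 0, lowest = ids[0];
    int found = 0;
    for (int i = 0; i < n; i++) {
        if (ids[i] < lowest) lowest = ids[i];
        if (ids[i] > current && (!found || ids[i] < next)) {
            next = ids[i];
            found = 1;
        }
    }
    return found ? next : lowest;   /* wrap around to the lowest id */
}
```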
We now discuss the data storage schemes for groups. Memory is a scarce resource in sensor nodes, and it is therefore important that witness information be stored efficiently among group members. Efficient data storage is not a trivial task when it comes to groups. The tradeoff is between the simplicity of the scheme and the memory savings. A simpler scheme incurs a lower energy cost than a more sophisticated scheme, but offers smaller memory savings as well. This is because in a more complicated scheme, the group members have to coordinate to update and store information. After considering a number of different schemes, we have come to the conclusion that there is no optimal storage scheme for groups. The system should be able to adapt according to its requirements. If group members are low on battery, the group can adopt a scheme that is more energy efficient. Similarly, if the group members are running out of memory, they can adopt a scheme that is more memory efficient. We first present a simple scheme that is very energy efficient but does not offer significant memory savings. We then present an alternate scheme that is much more memory efficient.

As already mentioned, a group can receive information only through the group leader. Whenever the leader comes across an external node e, it receives information from that node and saves it. In our first scheme, when the timeslot for the leader expires, the leader passes the new information it received from e to its successor. This is important because during the next time slot, if the new leader comes across another external node, it should be able to pass information about all the external nodes this group has witnessed so far. Thus the information is fully replicated on all nodes to maintain a correct view of the world.

Our first scheme does not offer any memory savings but is highly energy efficient, and may be a good choice when the group members are running low on battery. Except for the time when the leadership is switched, all n - 1 members are asleep at any given time. This means that a single member is up for t seconds once every n*t seconds and therefore has to spend approximately only 1/n-th of its energy. Thus, if there are 5 members in a group, we save 80% of the energy, which is huge. More energy can be saved by increasing the group size.

We now present an alternate data storage scheme that aims at saving memory at the cost of energy. In this scheme we divide the group into what we call partitions. Partitions can be thought of as subgroups within a group. Each partition must have at least two nodes in it. The nodes within a partition are called peers. Each partition has one peer designated as the partition leader. The partition leader stays in receive mode at all times, while all other peers in the partition stay in sleep mode. Partition leadership is time-multiplexed among the peers to make sure that a single node does not run out of battery. Like before, a group has exactly one leader, and the leadership is time-multiplexed among partitions. The group leader also serves as the partition leader for the partition it belongs to (see Figure 11).

In this scheme, all partition leaders participate in information exchange. Whenever a group comes across an external node e, every partition leader receives all witness information, but it only stores a subset of that information after filtering. Information is filtered in such a way that each partition leader has to store only B/K bytes of data, where K is the number of partitions and B is the total number of bytes received from e. Similarly, when a group wants to send witness information to e, each partition leader sends only the B/K bytes that are stored in the partition it belongs to. However, before a partition leader can send information, it must switch from receive mode to send mode. Also, partition leaders must coordinate with one another to ensure that they do not send their witness information at the same time, i.e. their messages do not collide. All this is achieved by having the group leader send a signal to every partition leader in turn.

Figure 11: The figure shows a group of eight nodes divided into four partitions of 2 nodes each. Node 1 is the group leader whereas nodes 2, 9, and 7 are partition leaders. All other nodes are in sleep mode.
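One simple way to realize the B/K split is to assign incoming records to partition leaders round-robin by index, so each leader keeps a disjoint 1/K share. The modulo rule below is our own assumption; the paper does not specify the exact filter. The sketch reuses the witness_record_t type from the earlier sketch.

```c
#include <stddef.h>

/* Partition leader k of K keeps every K-th record of a received batch,
   so each leader stores about B/K of the B bytes received. */
size_t filter_for_partition(const witness_record_t *in, size_t n,
                            witness_record_t *out, size_t k, size_t K) {
    size_t kept = 0;
    for (size_t i = 0; i < n; i++)
        if (i % K == k)              /* deterministic, disjoint shares */
            out[kept++] = in[i];
    return kept;
}
```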
Since the partition leadership is time-multiplexed, it is important that any information received by the partition leader, p1, be passed on to the next leader, p2. This has to be done to make sure that p2 has all the information that it might need to send when it comes across another external node during its timeslot. One way of achieving this is to wake p2 up just before p1's timeslot expires and then have p1 transfer information only to p2. An alternative is to wake all the peers up at the time of the leadership change, and then have p1 broadcast the information to all peers. Each peer saves the information sent by p1 and then goes back to sleep. In both cases, the peers send an acknowledgement to the partition leader after receiving the information. In the former method, only one node needs to wake up at the time of the leadership change, but the amount of information that has to be transmitted between the nodes increases as time passes. In the latter case, all nodes have to be woken up at the time of the leadership change, but only a small piece of information has to be transmitted each time among the peers. Since communication is much more expensive than bringing the nodes up, we prefer the second method over the first one.

A group can be divided into partitions in more than one way. For example, suppose we have a group of six members. We can divide this group into three partitions of two peers each, or two partitions with three peers each. The choice once again depends on the requirements of the system. A few big partitions will make the system more energy efficient. This is because in this configuration, a greater number of nodes will stay in sleep mode at any given point in time. On the other hand, several small partitions will make the system memory efficient, since each node will have to store less information (see Figure 12).

A group that is divided into partitions must be able to readjust itself when a node leaves or runs out of battery. This is crucial because a partition must have at least two nodes at any point in time to tolerate the failure of one node. For example, in Figure 12(a), if node 2 or node 5 dies, the partition is left with only one node. Later on, if that single node in the partition dies, all witness information stored in that partition will be lost. We have devised a very simple protocol to solve this problem. We first explain how partitions are adjusted when a peer dies, and then explain what happens if a partition leader dies.

Figure 12: The figure shows two different ways of partitioning a group of six nodes. In (a), a group is divided into three partitions of two nodes. Node 1 is the group leader, nodes 9 and 5 are partition leaders, and nodes 2, 3, and 6 are in sleep mode. In (b), the group is divided into two partitions of three nodes. Node 1 is the group leader, node 9 is the partition leader, and nodes 2, 3, 5, and 6 are in sleep mode.

Suppose node 2 in Figure 12(a) dies. When node 5, the partition leader, sends information to node 2, it does not receive an acknowledgement and concludes that node 2 has died. (The algorithm to conclude that a node has died can be made more rigorous by having the partition leader query the suspected node a few times.) At this point, node 5 contacts the other partition leaders (nodes 1 ... 9) using a broadcast message and informs them that one of its peers has died. Upon hearing this, each partition leader informs node 5 of (i) the number of nodes in its partition, (ii) a candidate node that node 5 can take if the number of nodes in its partition is greater than 2, and (iii) the amount of witness information stored in its partition.
Upon hearing from every leader, node 5 chooses the candidate node from the partition with the maximum number of peers (which must be greater than 2), and sends a message back to all leaders. Node 5 then sends data to its new peer to make sure that the information is replicated within the partition.

However, if all partitions have exactly two nodes, then node 5 must join another partition. It chooses to join the partition that has the least amount of witness information, and sends its witness information to the new partition leader. Witness information and membership updates are propagated to all peers during the next partition leadership change.

We now consider the case where the partition leader dies. If this happens, we wait for the partition leadership to change and for the new partition leader to eventually find out that a peer has died. Once the new partition leader finds out that it needs more peers, it proceeds with the protocol explained above. However, in this case, we do lose information that the previous partition leader might have received just before it died. This problem can be solved by implementing a more rigorous protocol, but we have decided to give up some accuracy to save energy.

Our current design uses time-division multiplexing to schedule wakeup and sleep modes in the sensor nodes. However, recent work on radio wakeup sensors [10] can be used to do this scheduling more efficiently. We plan to incorporate radio wakeup sensors in CenWits when the hardware is mature.

5. SYSTEM EVALUATION

A sensor is constrained in the amount of memory and power. In general, the amount of memory needed and the power consumption depend on a variety of factors such as node density, the number of hiker encounters, and the number of access points. In this section, we provide an estimate of how long the power of a MICA2 mote will last under certain assumptions.

First, we assume that each sensor node carries about 100 witness records. On encountering another hiker, a sensor node transmits 50 witness records and receives 50 new witness records. Since each record is 16 bytes long, it will take 0.34 seconds to transmit 50 records and another 0.34 seconds to receive 50 records over a 19200 bps link. The current draw of a MICA2 due to CPU processing, transmission and reception is approximately 8.0 mA, 8.5 mA and 7.0 mA respectively [18], and the capacity of an alkaline battery is 2500 mAh.

Since the radio module of the Mica2 is half-duplex, and assuming that the CPU is always active when a node is awake, the current draw during transmission is 8 + 8.5 = 16.5 mA and during reception is 8 + 7 = 15 mA. So, the average current draw due to transmission and reception is (16.5 + 15)/2 = 15.75 mA.

Given that the capacity of an alkaline battery is 2500 mAh, a battery should last for 2500/15.75 = 159 hours of transmission and reception. An encounter between two hikers results in an exchange of about 50 witness records each way, which takes about 0.68 seconds as calculated above.
Thus, a single alkaline battery can last for (159 × 60 × 60)/0.68 = 841,764 hiker encounters.

Assuming that a node emits a beacon every 90 seconds and a hiker encounter occurs every time a beacon is emitted (a worst-case scenario), a single alkaline battery will last for (841,764 × 90)/(30 × 24 × 60 × 60) = 29 days. Since a Mica2 is equipped with two batteries, a Mica2 sensor can remain operational for about two months. Notice that this calculation is preliminary, because it assumes that hikers are active 24 hours of the day and that a hiker encounter occurs every 90 seconds. In a more realistic scenario, power is expected to last for a much longer time period. Also, this time period will significantly increase when groups of hikers are moving together.

Finally, the lifetime of a sensor running on two batteries can definitely be increased significantly by using energy scavenging and energy harvesting techniques [16, 14].
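The first part of this estimate can be packaged as a small self-contained program; the constants are the ones used in the text, and the program simply reproduces the back-of-the-envelope arithmetic (it is not part of the CenWits code base).

```c
#include <stdio.h>

/* Back-of-the-envelope lifetime estimate from Section 5.
   All constants come from the text; small rounding differences
   from the paper's figures are expected. */
int main(void) {
    const double records_each_way = 50.0;     /* records sent and received */
    const double record_bytes     = 16.0;
    const double link_bps         = 19200.0;
    const double avg_current_ma   = (16.5 + 15.0) / 2.0;  /* TX/RX average */
    const double battery_mah      = 2500.0;

    /* 50 records out + 50 in: 1600 bytes = 12800 bits at 19200 bps */
    double encounter_s = 2.0 * records_each_way * record_bytes * 8.0
                         / link_bps;                      /* ~0.67 s       */
    double active_h    = battery_mah / avg_current_ma;    /* ~159 h        */
    double encounters  = active_h * 3600.0 / encounter_s; /* ~850,000      */

    printf("%.2f s per encounter, %.0f h active, %.0f encounters\n",
           encounter_s, active_h, encounters);
    return 0;
}
```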
6. PROTOTYPE IMPLEMENTATION

We have implemented a prototype of CenWits on 900 MHz MICA2 sensors running Mantis OS 0.9.1b. One of the sensors is equipped with an MTS420CA GPS module, which is capable of barometric pressure and two-axis acceleration sensing in addition to GPS location tracking. We use SiRF, the serial communication protocol, to control the GPS module. SiRF has a rich command set, but we record only the X and Y coordinates. A witness record is 16 bytes long. When a node starts up, it stores its current location and emits a beacon periodically; in the prototype, a node emits a beacon every minute.

We have conducted a number of experiments with this prototype. A detailed report on these experiments, with the raw data collected and photographs of hikers, access points, etc., is available at http://csel.cs.colorado.edu/~huangjh/Cenwits.index.htm. Here we report results from three of them. In all these experiments, there are three access points (A, B and C) where nodes dump their witness information. These access points also provide location information to the nodes that come within their range. We first show how CenWits can be used to determine the hiking trail a hiker is most likely on and the speed at which he is hiking, and to identify hot search areas in case he is reported missing. Next, we show the results of the power and memory management techniques of CenWits in conserving the power and memory of a sensor node in one of our experiments.

6.1 Locating Lost Hikers

The first experiment is called Direct Contact. It is a very simple experiment in which a single hiker starts from A, goes to B and then C, and finally returns to A (see Figure 13).

Figure 13: Direct Contact Experiment

Node Id | Record Time | (X,Y) | Location Time | Hop Count
1 | 15 | (12,7) | 15 | 0
1 | 33 | (31,17) | 33 | 0
1 | 46 | (12,23) | 46 | 0
1 | 10 | (12,7) | 10 | 0
1 | 48 | (12,23) | 48 | 0
1 | 16 | (12,7) | 16 | 0
1 | 34 | (31,17) | 34 | 0

Table 1: Witness information collected in the direct contact experiment.

The witness information dumped at the three access points was then collected and processed at a control center. Part of the witness information collected at the control center is shown in Table 1. The X,Y locations in this table correspond to the location information provided by access points A, B, and C. A is located at (12,7), B is located at (31,17) and C is located at (12,23). Three encounter points (between hiker 1 and the three access points) extracted from this witness information are shown in Figure 13 (in rectangular boxes). For example, "A,1 at 16" means 1 came in contact with A at time 16. Using this information, we can infer the direction in which hiker 1 was moving and the speed at which he was moving. Furthermore, given a map of the hiking trails in this area, it is clearly possible to identify the hiking trail that hiker 1 took.

The second experiment is called Indirect Inference. This experiment is designed to illustrate that the location, direction and speed of a hiker can be inferred by CenWits, even if the hiker never comes in the range of any access point. It illustrates the importance of witness information in search and rescue applications. In this experiment, there are three hikers, 1, 2 and 3. Hiker 1 takes a trail that goes along access points A and B, while hiker 3 takes a trail that goes along access points C and B. Hiker 2 takes a trail that does not come in the range of any access points. However, this hiker meets hikers 1 and 3 during his hike. This is illustrated in Figure 14.

Figure 14: Indirect Inference Experiment

Node Id | Record Time | (X,Y) | Location Time | Hop Count
2 | 16 | (12,7) | 6 | 0
2 | 15 | (12,7) | 6 | 0
1 | 4 | (12,7) | 4 | 0
1 | 6 | (12,7) | 6 | 0
1 | 29 | (31,17) | 29 | 0
1 | 31 | (31,17) | 31 | 0

Table 2: Witness information collected from hiker 1 in the indirect inference experiment.

Node Id | Record Time | (X,Y) | Location Time | Hop Count
3 | 78 | (12,23) | 78 | 0
3 | 107 | (31,17) | 107 | 0
3 | 106 | (31,17) | 106 | 0
3 | 76 | (12,23) | 76 | 0
3 | 79 | (12,23) | 79 | 0
2 | 94 | (12,23) | 79 | 0
1 | 16 | (?,?) | ? | 1
1 | 15 | (?,?) | ? | 1

Table 3: Witness information collected from hiker 3 in the indirect inference experiment.

Part of the witness information collected at the control center from access points A, B and C is shown in Tables 2 and 3. There are some interesting data in these tables. For example, the location time in some witness records is not the same as the record time. This means that the node that generated that record did not have its most up-to-date location at the encounter time. For example, when hikers 1 and 2 meet at time 16, the last recorded location of hiker 1 is (12,7), recorded at time 6. So, node 1 generates a witness record with record time 16, location (12,7) and location time 6. In fact, the last two records in Table 3 have (?,?) as their location. This happened because these witness records were generated by hiker 2 during his encounters with 1 at times 15 and 16. Until this time, hiker 2 hadn't come in contact with any location points.

Interestingly, a more accurate location of the 1 and 2 encounter or the 2 and 3 encounter can be computed by processing the witness information at the control center. It took 25 units of time for hiker 1 to go from A (12,7) to B (31,17). Assuming a constant hiking speed and a relatively straight-line hike, it can be computed that at time 16, hiker 1 must have been at location (18,10). Thus (18,10) is a more accurate location of the encounter between 1 and 2.
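This refinement is a straight-line interpolation between hiker 1's two witnessed fixes, assuming constant speed; a minimal sketch with our own naming, not code from the prototype:

```c
/* Linear interpolation of a hiker's position between two witnessed
   fixes (a at time t_a, b at time t_b), assuming constant speed and
   a straight-line trail segment, evaluated at time t. */
typedef struct { double x, y; } point_t;

point_t interpolate_position(point_t a, double t_a,
                             point_t b, double t_b, double t) {
    double f = (t - t_a) / (t_b - t_a);   /* fraction of the leg covered */
    point_t p = { a.x + f * (b.x - a.x),
                  a.y + f * (b.y - a.y) };
    return p;
}
```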
Finally, our third experiment, called Identifying Hot Search Areas, is designed to determine the trail a hiker has taken and to identify hot search areas for rescue after he is reported missing. There are six hikers (1, 2, 3, 4, 5 and 6) in this experiment. Figure 15 shows the trails that hikers 1, 2, 3, 4 and 5 took, along with the encounter points obtained from witness records collected at the control center. For brevity, we have not shown the entire witness information collected at the control center. This information is available at http://csel.cs.colorado.edu/~huangjh/Cenwits/index.htm.

Figure 15: Identifying Hot Search Area Experiment (without hiker 6)

Now suppose hiker 6 is reported missing at time 260. To determine the hot search areas, the witness records of hiker 6 are processed to determine the trail he is most likely on, the speed at which he had been moving, the direction in which he had been moving, and his last known location. Based on this information and the hiking trail map, hot search areas are identified. The hiking trail taken by hiker 6 as inferred by CenWits is shown by a dotted line, and the hot search areas identified by CenWits are shown by dark lines inside the dotted circle in Figure 16.

Figure 16: Identifying Hot Search Area Experiment (with hiker 6)

6.2 Results of Power and Memory Management
The witness information shown in Tables 1, 2 and 3 has not been filtered using the three criteria described in Section 4.1. For example, the witness records generated by 3 at record times 76, 78 and 79 (see Table 3) have all been generated due to a single contact between access point C and node 3. By applying the record gap criterion, two of these three records will be erased. Similarly, the witness records generated by 1 at record times 10, 15 and 16 (see Table 1) have all been generated due to a single contact between access point A and node 1. Again, by applying the record gap criterion, two of these three records will be erased. Our experiments did not generate enough data to test the impact of the record count or hop count criteria.

To evaluate the impact of these criteria, we simulated CenWits to generate a significantly larger number of records for a given number of hikers and access points. We generated witness records by having the hikers walk randomly, and applied the three criteria to measure the amount of memory saved in a sensor node. The results are shown in Table 4. The number of hikers in this simulation was 10 and the number of access points was 5. The number of witness records reported in this table is the average number of witness records a sensor node had stored at the time of a dump to an access point.

MAX RECORD COUNT   MIN RECORD GAP   MAX HOP COUNT   # of Witness Records
5                  5                5               628
4                  5                5               421
3                  5                5               316
5                  10               5               311
5                  20               5               207
5                  5                4               462
5                  5                3               341
3                  20               3               161

Table 4: Impact of memory management techniques.

These results show that the three memory management criteria significantly reduce the memory consumption of sensor nodes in CenWits. For example, they can reduce the memory consumption by up to 75%. However, these results are preliminary for two reasons: (1) they were generated via a simulation of hikers walking at random; and (2) it is not clear what impact the erasing of witness records has on the accuracy of the inferred location/hot search areas of lost hikers. In our future work, we plan to undertake a major study to address these two concerns.
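A sketch of how the three criteria might be applied is shown below. The paper names the criteria and the threshold columns of Table 4; the greedy pruning order and the exact eviction policy here are our assumptions.

```python
MAX_RECORD_COUNT = 5   # keep at most this many records per witnessed node
MIN_RECORD_GAP   = 5   # drop records closer than this in record time
MAX_HOP_COUNT    = 5   # drop records that travelled over too many hops

def prune(records):
    """records: iterable of dicts with 'node', 'record_time' and 'hops' keys."""
    by_node = {}
    for r in sorted(records, key=lambda r: (r["node"], r["record_time"])):
        if r["hops"] > MAX_HOP_COUNT:          # hop-count criterion
            continue
        kept = by_node.setdefault(r["node"], [])
        if kept and r["record_time"] - kept[-1]["record_time"] < MIN_RECORD_GAP:
            continue                            # record-gap criterion
        kept.append(r)
        if len(kept) > MAX_RECORD_COUNT:        # record-count criterion
            kept.pop(0)                         # evict the oldest record
    return [r for recs in by_node.values() for r in recs]

# Node 3's contacts with access point C at times 76, 78 and 79 collapse to one.
demo = [{"node": 3, "record_time": t, "hops": 0} for t in (76, 78, 79)]
print(len(prune(demo)))  # -> 1
```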
7. OTHER APPLICATIONS
In addition to hiking in wilderness areas, CenWits can be used in several other applications, e.g., skiing, climbing, wildlife monitoring, and person tracking. Since CenWits relies only on intermittent connectivity, it can take advantage of existing cheap and mature technologies, and thereby make tracking cheaper and fairly accurate. Since CenWits doesn't rely on keeping track of a sensor holder at all times, but rather on maintaining witnesses, the system is relatively cheap and widely applicable. For example, there are dangerous cliffs in most ski resorts, but it is too expensive for a ski resort to deploy a connected wireless sensor network throughout the mountain. Using CenWits, we can deploy some sensors at the cliff boundaries. These boundary sensors emit beacons quite frequently, e.g., every second, and so can record the presence of skiers who cross the boundary and fall off the cliff. Ski patrols can cruise the mountains every hour and automatically query the boundary sensors when in range using PDAs. If a PDA shows that a skier has been close to a boundary sensor, the ski patrol can use a long-range walkie-talkie to query the control center at the resort base to check the witness records of that skier. If there is no witness record after the time recorded in the boundary sensor, there is a high chance that a rescue is needed.

In wildlife monitoring, a very popular method is to attach a GPS receiver to the animals. To collect the data, either a satellite transmitter is used, or the data collector has to wait until the GPS receiver brace falls off (after a year or so) and then search for the GPS receiver. GPS transmitters are very expensive, e.g., the one used in geese tracking costs $3,000 each [2]. Also, it is not yet known whether continuous radio signals are harmful to the birds. Furthermore, a GPS transmitter is quite bulky and uncomfortable, and as a result birds always try to get rid of it. Using CenWits, not only can we record the presence of wildlife, we can also record the behavior of wild animals, e.g., lions might follow the migration of deer. CenWits requires neither bulky and expensive satellite transmitters nor waiting a year to search for the braces. CenWits provides a very simple and cost-effective solution in this case. Also, access points can be strategically located, e.g., near a water source, to increase the chances of collecting up-to-date data. In fact, the access points need not be statically located: they can be placed in a low-altitude plane (e.g., a UAV) and flown over a wilderness area to collect data from wildlife.

In large cities, CenWits can be used to complement GPS, since GPS doesn't work indoors or near skyscrapers. If a person A is reported missing, and from the witness records we find that his last contacts were C and D, we can trace an approximate location quickly and quite efficiently.

8. DISCUSSION AND FUTURE WORK
This paper presents a new search and rescue system called CenWits that has several advantages over current search and rescue systems. These advantages include a loosely coupled design that relies only on intermittent network connectivity, power and storage efficiency, and low cost. It solves one of the greatest problems plaguing modern search and rescue systems: it has an inherent on-site storage capability. This means someone within the network will have access to the last-known-location information of a victim, and perhaps his bearing and speed information as well. It utilizes the concept of witnesses to propagate information, infer the current possible location and speed of a subject, and identify hot search and rescue areas in case of emergencies. A large part of the CenWits design focuses on addressing the power and memory limitations of current sensor nodes.
In fact, power and memory constraints depend on how much weight (of a sensor node) a hiker is willing to carry and on the cost of these sensors. An important goal of CenWits is to build small chips that can be implanted in hiking boots or ski jackets, similar to the avalanche beacons that are currently implanted in ski jackets. We anticipate that power and memory will continue to be constrained in such an environment.

While the paper focuses on the development of a search and rescue system, it also provides some innovative, system-level ideas for information processing in a sensor network system.

We have developed and experimented with a basic prototype of CenWits at present. Future work includes developing a more mature prototype addressing important issues such as security, privacy, and high availability. There are several pressing concerns regarding security, privacy, and high availability in CenWits. For example, an adversary could sniff the witness information to locate endangered animals, women, children, etc., or inject false information into the system. An individual may not be comfortable providing his/her location and movement information, even though he/she is definitely interested in being located in a timely manner at the time of an emergency. In general, people in the hiking community are friendly and usually trustworthy, so bullet-proof security is not really required. However, when CenWits is used in the context of other applications, the security requirements may change. Furthermore, since the sensor nodes used in CenWits are fragile, they can fail. In fact, the nature and level of security, privacy and high-availability support needed in CenWits strongly depends on the application for which it is being used and on the individual subjects involved. Accordingly, we plan to design multi-level support for security, privacy and high availability in CenWits.

So far, we have experimented with CenWits in a very restricted environment with a small number of sensors. Our next goal is to deploy this system in a much larger and more realistic environment. In particular, discussions are currently underway to deploy CenWits in the Rocky Mountain and Yosemite National Parks.

9. REFERENCES
[1] 802.11-based tracking system. http://www.pangonetworks.com/locator.htm.
[2] Brent geese 2002. http://www.wwt.org.uk/brent/.
[3] The OnStar system. http://www.onstar.com.
[4] Personal locator beacons with GPS receiver and satellite transmitter. http://www.aeromedix.com/.
[5] Personal tracking using GPS and GSM system. http://www.ulocate.com/trimtrac.html.
[6] RF-based kid tracking system. http://www.ion-kids.com/.
[7] F. Alessio. Performance measurements with motes technology. MSWiM'04, 2004.
[8] P. Bahl and V. N. Padmanabhan. RADAR: An in-building RF-based user location and tracking system. IEEE Infocom, 2000.
[9] K. Fall. A delay-tolerant network architecture for challenged internets. In SIGCOMM, 2003.
[10] L. Gu and J. Stankovic. Radio triggered wake-up capability for sensor networks. In Real-Time Applications Symposium, 2004.
[11] J. Hightower and G. Borriello. Location systems for ubiquitous computing. IEEE Computer, 2001.
[12] W. Jaskowski, K. Jedrzejek, B. Nyczkowski, and S. Skowronek. Lifetch life saving system. CSIDC, 2004.
[13] P. Juang, H. Oki, Y. Wang, M. Martonosi, L. Peh, and D. Rubenstein. Energy-efficient computing for wildlife tracking: design tradeoffs and early experiences with ZebraNet. In ASPLOS, 2002.
[14] A.
Kansal and M. Srivastava. Energy harvesting aware power management. In Wireless Sensor Networks: A Systems Perspective, 2005.
[15] G. J. Pottie and W. J. Kaiser. Embedding the internet: wireless integrated network sensors. Communications of the ACM, 43(5), May 2000.
[16] S. Roundy, P. K. Wright, and J. Rabaey. A study of low-level vibrations as a power source for wireless sensor networks. Computer Communications, 26(11), 2003.
[17] C. Savarese, J. M. Rabaey, and J. Beutel. Locationing in distributed ad-hoc wireless sensor networks. ICASSP, 2001.
[18] V. Shnayder, M. Hempstead, B. Chen, G. Allen, and M. Welsh. Simulating the power consumption of large-scale sensor network applications. In Sensys, 2004.
[19] R. Want and A. Hopper. Active badges and personal interactive computing objects. IEEE Transactions on Consumer Electronics, 1992.
[20] M. Welsh and G. Mainland. Programming sensor networks using abstract regions. First USENIX/ACM Symposium on Networked Systems Design and Implementation (NSDI '04), 2004.", "keywords": "intermittent network connectivity;gps receiver;pervasive computing;connected network;satellite transmitter;emergency situation;search and rescue;witness;beacon;location tracking system;rf transmitter;group and partition;sensor network;hiker"} {"name": "train_C-52", "title": "Fairness in Dead-Reckoning based Distributed Multi-Player Games", "abstract": "In a distributed multi-player game that uses dead-reckoning vectors to exchange movement information among players, there is inaccuracy in rendering the objects at the receiver due to network delay between the sender and the receiver. The object is placed at the receiver at the position indicated by the dead-reckoning vector, but by that time, the real position could have changed considerably at the sender. This inaccuracy would be tolerable if it were consistent among all players; that is, if at the same physical time all players saw an inaccurate (with respect to the real position of the object) but identical position and trajectory for an object. But due to varying network delays between the sender and different receivers, the inaccuracy differs at different players as well. This leads to unfairness in game playing. In this paper, we first introduce an error measure for estimating this inaccuracy. Then we develop an algorithm for scheduling the sending of dead-reckoning vectors at a sender that strives to make this error equal at different receivers over time. This algorithm makes the game very fair at the expense of increasing the overall mean error of all players. To mitigate this effect, we propose a budget based algorithm that provides improved fairness without increasing the mean error, thereby maintaining the accuracy of game playing. We have implemented both the scheduling algorithm and the budget based algorithm as part of BZFlag, a popular distributed multi-player game. We show through experiments that these algorithms provide fairness among players in spite of widely varying network delays. An additional property of the proposed algorithms is that they require fewer DRs to be exchanged (compared to the current implementation of BZFlag) to achieve the same level of accuracy in game playing.", "fulltext": "1. INTRODUCTION
In a distributed multi-player game, players are normally distributed across the Internet and have varying delays to each other or to a central game server.
Usually, in such games, the players are part of the game and, in addition, they may control entities that make up the game. During the course of the game, the players and the entities move within the game space. A player sends information about her movement, as well as the movement of the entities she controls, to the other players using a Dead-Reckoning (DR) vector. A DR vector contains information about the current position of the player/entity in terms of x, y and z coordinates (at the time the DR vector was sent) as well as the trajectory of the entity in terms of the velocity component in each of the dimensions. Each of the participating players receives such DR vectors from one another and renders the other players/entities on the local console until a new DR vector is received for that player/entity. In a peer-to-peer game, players send DR vectors directly to each other; in a client-server game, these DR vectors may be forwarded through a game server.

The idea of DR is used because it is almost impossible for players/entities to exchange their current positions at every time unit. DR vectors are a quantization of the real trajectory (which we refer to as the real path) at a player. Normally, a new DR vector is computed and sent whenever the real path deviates from the path extrapolated using the previous DR vector (say, in terms of distance in the x, y, z plane) by some amount specified by a threshold. We refer to the trajectory that can be computed using the sequence of DR vectors as the exported path. Therefore, at the sending player, there is a deviation between the real path and the exported path. The error due to this deviation could be removed only if each movement of a player/entity were communicated to the other players at every time unit, that is, if a DR vector were generated at every time unit, making the real and exported paths the same. Given that this is not feasible due to bandwidth limitations, this error is not of practical interest. Therefore, the receiving players can, at best, follow the exported path. Because of the network delay between the sending and receiving players, when a DR vector is received and rendered at a player, the original trajectory of the player/entity may have already changed. Thus, in physical time, there is a deviation at the receiving player between the exported path and the rendered trajectory (which we refer to as the placed path). We refer to this error as the export error. Note that the export error, in turn, results in a deviation between the real and the placed paths.

The export error manifests itself due to the deviation between the exported path at the sender and the placed path at the receiver (i) before the DR vector is received at the receiver (referred to as the before export error), and (ii) after the DR vector is received at the receiver (referred to as the after export error). In an earlier paper [1], we showed that by synchronizing the clocks at all the players and by using a technique based on time-stamping the messages that carry the DR vectors, we can guarantee that the after export error is made zero. That is, the placed and the exported paths match after the DR vector is received.
We also showed that the before export error can never be eliminated, since there is always a non-zero network delay, but that it can be significantly reduced using our technique [1]. Henceforth we assume that the players use such a technique, which results in an unavoidable but small overall export error.

In this paper we consider the problem of different and varying network delays between each sender-receiver pair of a DR vector and, consequently, the different and varying export errors at the receivers. Due to the difference in the export errors among the receivers, the same entity is rendered at different physical times at different receivers. This brings unfairness into game playing. For instance, a player with a large delay would always see an entity late in physical time compared to the other players and, therefore, her action on the entity would be delayed (in physical time) even if she reacted instantaneously after the entity was rendered. Our goal in this paper is to improve the fairness of these games in spite of the varying network delays by equalizing the export error at the players. We explore whether the time-average of the export errors (the cumulative export error over a period of time, averaged over the time period) at all the players can be made the same by appropriately scheduling the sending of the DR vectors at the sender. We propose two algorithms to achieve this.

Both algorithms are based on delaying (or dropping) the sending of DR vectors to some players on a continuous basis to try to make the export error the same at all the players. At an abstract level, the algorithms delay sending DR vectors to players whose accumulated error so far in the game is smaller than that of others; the export error due to this DR vector at these players will then be larger than that at the other players, making the errors the same. The goal is to make this error at least approximately equal at every DR vector, with the deviation in the error becoming smaller as time progresses.

The first algorithm (which we refer to as the scheduling algorithm) is based on estimating the delay between players and refining the sending of DR vectors by scheduling them to be sent to different players at different times at every DR generation point. Through an implementation of this algorithm using the open source game BZFlag, we show that this algorithm makes the game very fair (we measure fairness in terms of the standard deviation of the error). The drawback of this algorithm is that it tends to push the error of all the players towards that of the player with the worst error (which is the error at the farthest player, in terms of delay, from the sender of the DR). To alleviate this effect, we propose a budget based algorithm which budgets how the DRs are sent to different players. At a high level, the algorithm is based on the idea of sending more DRs to players who are farther away from the sender than to those who are closer. Experimental results from BZFlag illustrate that the budget based algorithm follows a more balanced approach: it improves the fairness of the game, but does so without pushing up the mean error of the players, thereby maintaining the accuracy of the game. In addition, the budget based algorithm is shown to achieve the same level of accuracy of game playing as the current implementation of BZFlag using far fewer DR vectors.

2.
PREVIOUS WORK
Earlier work on network games dealing with network latency has mostly focused on compensation techniques for packet delay and loss [2, 3, 4]. These methods are aimed at making large delays and message loss tolerable for players, but they do not consider the problems that may be introduced by varying delays from the server to different players or from the players to one another. For example, the concept of local lag has been used in [3], where each player delays every local operation for a certain amount of time so that remote players can receive information about the local operation and execute the same operation at about the same time, thus reducing state inconsistencies. The online multi-player game MiMaze [2, 5, 6], for example, takes a static bucket synchronization approach to compensate for variable network delays. In MiMaze, each player delays all events by 100 ms, regardless of whether they are generated locally or remotely. Players with a network delay larger than 100 ms simply cannot participate in the game. In general, techniques based on bucket synchronization depend on imposing a worst-case delay on all the players.

There have been a few papers that have studied the problem of fairness in a distributed game through more sophisticated message delivery mechanisms. But these works [7, 8] assume the existence of a global view of the game, where a game server maintains a view (or state) of the game. Players can introduce objects into the game or delete objects that are already part of the game (for example, in a first-person shooter game, by shooting down the object). These additions and deletions are communicated to the game server using action messages. Based on these action messages, the state of the game is changed at the game server, and these changes are communicated to the players using update messages. Fairness is achieved by ordering the delivery of action and update messages at the game server and players, respectively, based on the notion of a fair order, which takes into account the delays between the game server and the different players. Objects that are part of the game may move, but how this information is communicated to the players seems to be beyond the scope of these works. In this sense, these works are limited in scope and may be applicable only to first-person shooter games, and then only to games where players are not part of the game.

DR vectors can be exchanged directly among the players (peer-to-peer model) or using a central server as a relay (client-server model). It has been shown in [9] that multi-player games that use DR vectors together with bucket synchronization are not cheat-proof unless additional mechanisms are put in place. Both the scheduling algorithm and the budget based algorithm described in our paper use DR vectors and hence are not cheat-proof. For example, a receiver could skew the delay estimate at the sender to make the sender believe that the delay between the sender and the receiver is high, thereby gaining undue advantage. We emphasize that the focus of this paper is on fairness, without addressing the issue of cheating.

In the next section, we describe the game model that we use and illustrate how senders and receivers exchange DR vectors and how entities are rendered at the receivers based on the time-stamp augmented DR vector exchange described in [1].
In Section 4, we describe the DR vector scheduling algorithm that aims to make the export error equal across players with varying delays from the sender of a DR vector, followed by experimental results obtained from an instrumentation of the scheduling algorithm in the open source game BZFlag. Section 5 describes the budget based algorithm, which achieves improved fairness without reducing the level of accuracy of game playing. Conclusions are presented in Section 6.

3. GAME MODEL
The game architecture is based on players distributed across the Internet who exchange DR vectors with each other. The DR vectors could either be sent directly from one player to another (peer-to-peer model) or be sent through a game server which receives the DR vector from a player and forwards it to the other players (client-server model). As mentioned before, we assume synchronized clocks among the participating players.

Each DR vector sent from one player to another specifies the trajectory of exactly one player/entity. We assume a linear DR vector, in that the information contained in the DR vector is only enough for the receiving player to compute the trajectory and render the entity along a straight-line path. Such a DR vector contains information about the starting position and velocity of the player/entity, where the velocity is constant. (Other types of DR vectors include quadratic DR vectors, which specify the acceleration of the entity, and cubic spline DR vectors, which consider the starting position and velocity and the ending position and velocity of the entity.) Thus, the DR vectors sent by a player specify the current time at the player when the DR vector is computed (not the time at which this DR vector is sent to the other players, as we will explain later), the current position of the player/entity in terms of the x, y, z coordinates, and the velocity vector in the direction of the x, y and z coordinates. Specifically, the ith DR vector sent by player j about the kth entity is denoted by DR^j_ik and is represented by the tuple (T^j_ik, x^j_ik, y^j_ik, z^j_ik, vx^j_ik, vy^j_ik, vz^j_ik).

Without loss of generality, in the rest of the discussion we consider a sequence of DR vectors sent by only one player for only one entity, and for simplicity we consider a two-dimensional game space rather than a three-dimensional one. Hence we use DRi to denote the ith such DR vector, represented as the tuple (Ti, xi, yi, vxi, vyi). The receiving player computes the starting position for the entity based on xi, yi and the time difference between when the DR vector is received and the time Ti at which it was computed. Note that the computation of this time difference is feasible since all the clocks are synchronized. The receiving player then uses the velocity components to project and render the trajectory of the entity. This trajectory is followed until a new DR vector is received, which changes the position and/or velocity of the entity.
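A minimal sketch (ours, not BZFlag code) of how a receiver with a clock synchronized to the sender places and extrapolates an entity from a linear DR vector is shown below; the field values are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class DRVector:
    T: float    # time at which the DR vector was computed at the sender
    x: float
    y: float
    vx: float
    vy: float

def placed_position(dr: DRVector, now: float):
    """Position rendered at the receiver at (synchronized) time `now`:
    the DR's position advanced by the elapsed time along its velocity."""
    dt = now - dr.T
    return (dr.x + dr.vx * dt, dr.y + dr.vy * dt)

dr = DRVector(T=10.0, x=0.0, y=0.0, vx=1.0, vy=0.5)
print(placed_position(dr, now=10.4))  # entity placed at (0.4, 0.2)
```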
Figure 1: Trajectories and deviations.

Based on this model, Figure 1 illustrates the sending and receiving of DR vectors and the different errors that are encountered. The figure shows the reception of DR vectors at a player (henceforth called the receiver). The horizontal axis shows the time, which is synchronized among all the players. The vertical axis conceptually captures the two-dimensional position of an entity. Assume that at time T0 a DR vector DR0 is computed by the sender and immediately sent to the receiver. Assume that DR0 is received at the receiver after a delay of dt0 time units. The receiver computes the initial position of the entity as (x0 + vx0 × dt0, y0 + vy0 × dt0) (shown as point E). The thick line EBD represents the projected and rendered trajectory at the receiver based on the velocity components vx0 and vy0 (the placed path). At time T1 a DR vector DR1 is computed for the same entity and immediately sent to the receiver. (Normally, DR vectors are not computed on a periodic basis but on an on-demand basis, where the decision to compute a new DR vector is based on some threshold being exceeded on the deviation between the real path and the path exported by the previous DR vector.) Assume that DR1 is received at the receiver after a delay of dt1 time units. When this DR vector is received, assume that the entity is at point D. A new position for the entity is computed as (x1 + vx1 × dt1, y1 + vy1 × dt1) and the entity is moved to this position (point C). The velocity components vx1 and vy1 are used to project and render this entity further.

Let us now consider the error due to network delay. Although DR1 was computed at time T1 and sent to the receiver, it did not reach the receiver until time T1 + dt1. This means that although the exported path based on DR1 at the sender at time T1 is the trajectory AC, until time T1 + dt1 this entity was being rendered at the receiver along trajectory BD based on DR0. Only at time T1 + dt1 did the entity get moved to point C, from which point onwards the exported and the placed paths are the same. The deviation between the exported and placed paths creates an error component which we refer to as the export error. A way to represent the export error is to compute the integral of the distance between the two trajectories over the time during which they are out of sync. We represent the integral of the distance between the placed and exported paths due to some DR vector DRi over a time interval [t1, t2] as Err(DRi, t1, t2). In the figure, the export error due to DR1 is computed as the integral of the distance between the trajectories AC and BD over the time interval [T1, T1 + dt1]. Note that there could be other ways of representing this error as well, but in this paper we use the integral of the distance between the two trajectories as the measure of the export error. Note also that an export error would have been created by the reception of DR0, at which time the placed path would have been based on a previous DR vector. This is not shown in the figure, but it serves to remind the reader that the export error is cumulative when a sequence of DR vectors is received. Starting from time T1 onwards, there is a deviation between the real and the exported paths. As we discussed earlier, this export error is unavoidable.

The above figure and example illustrate one receiver only. But in reality, DR vectors DR0 and DR1 are sent by the sender to all the participating players. Each of these players receives DR0 and DR1 after varying delays, thereby creating different export error values at different players. The goal of the DR vector scheduling algorithm, described in the next section, is to make this (cumulative) export error equal at every player, independently for each of the entities that make up the game.
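A sketch (ours) of the export-error measure just defined: Err(DR1, t1, t2) is the integral over [t1, t2] of the distance between the exported path (from DR1) and the placed path (still following DR0). Midpoint-rule integration; the DR values are made up for illustration.

```python
import math

def position(dr, t):
    """dr = (T, x, y, vx, vy): position at time t on its straight-line path."""
    T, x, y, vx, vy = dr
    return (x + vx * (t - T), y + vy * (t - T))

def export_error(dr_new, dr_old, t1, t2, steps=1000):
    """Integral of the distance between the two trajectories over [t1, t2]."""
    h = (t2 - t1) / steps
    total = 0.0
    for i in range(steps):
        t = t1 + (i + 0.5) * h
        (x1, y1), (x0, y0) = position(dr_new, t), position(dr_old, t)
        total += math.hypot(x1 - x0, y1 - y0) * h
    return total

dr0 = (0.0, 0.0, 0.0, 1.0, 0.0)   # old DR: moving along x
dr1 = (5.0, 5.0, 1.0, 1.0, 0.2)   # new DR: offset in y, different vy
print(export_error(dr1, dr0, 5.0, 5.3))  # error accrued before DR1 arrives
```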
4. SCHEDULING ALGORITHM FOR SENDING DR VECTORS
In Section 3 we showed how the delay from the sender of a new DR vector to the receiver of the DR vector can lead to export error, because of the deviation of the placed path from the exported path at the receiver until this new DR vector is received. We also mentioned that the goal of the DR vector scheduling algorithm is to make the export error equal at all receivers over a period of time. Since the game is played in a distributed environment, it makes sense for the sender of an entity to keep track of all the errors at the receivers and try to make them equal. However, the sender cannot know the actual error at a receiver until it gets some information regarding the error back from that receiver. Our algorithm estimates the error to compute a schedule for sending DR vectors to the receivers, and corrects the error when it gets feedback from the receivers. In this section we provide the motivation for the algorithm and describe the steps it goes through. Throughout this section, we will use the following example to illustrate the algorithm.

Figure 2: DR vector flow between a sender and two receivers and the evolution of estimated and actual placed paths at the receivers. DR0 = (T0, T0, x0, y0, vx0, vy0), sent at time T0 to both receivers. DR1 = (T1, T^1_1, x1, y1, vx1, vy1), sent at time T^1_1 = T1 + δ1 to receiver 1, and DR1 = (T1, T^2_1, x1, y1, vx1, vy1), sent at time T^2_1 = T1 + δ2 to receiver 2.

Consider the example in Figure 2. The figure shows a single sender sending DR vectors for an entity to two different receivers, 1 and 2. DR0, computed at T0, is sent to and received by the receivers sometime between T0 and T1, at which time they move the location of the entity to match the exported path. Thus, the path of the entity is shown only from the point where the placed path matches the exported path for DR0. Now consider DR1. At time T1, DR1 is computed by the sender, but assume that it is not immediately sent to the receivers: it is sent after a delay of δ1 to receiver 1 (at time T^1_1 = T1 + δ1) and after a delay of δ2 to receiver 2 (at time T^2_1 = T1 + δ2). Note that the sender includes the sending timestamp with the DR vector, as shown in the figure. Assume that the sender estimates (it will become clear shortly why the sender has to estimate the delay) that after a delay of dt1, receiver 1 will receive it, will use the coordinate and velocity parameters to compute the entity's current location and move it there (point C), and that from this time onwards the exported and the placed paths will be the same. However, in reality, receiver 1 receives DR1 after a delay of da1 (which is less than the sender's estimate dt1) and moves the corresponding entity to point H.
Similarly, the sender estimates that after a delay of dt2, receiver 2 will receive DR1, will compute the current location of the entity, and will move it to that point (point E), while in reality it receives DR1 after a delay of da2 > dt2 and moves the entity to point N. The other points shown on the placed and exported paths will be used later in the discussion to describe the different error components.

4.1 Computation of Relative Export Error
Referring back to the discussion from Section 3, from the sender's perspective, the export error at receiver 1 due to DR1 is given by Err(DR1, T1, T1 + δ1 + dt1) (the integral of the distance between the trajectories AC and DB over the time interval [T1, T1 + δ1 + dt1]) in Figure 2. This is due to the fact that the sender uses the estimated delay dt1 to compute this error. Similarly, the export error from the sender's perspective at receiver 2 due to DR1 is given by Err(DR1, T1, T1 + δ2 + dt2) (the integral of the distance between the trajectories AE and DF over the time interval [T1, T1 + δ2 + dt2]). Note that the above errors from the sender's perspective are only estimates. In reality, the export error will be either smaller or larger than the estimated value, based on whether the delay estimate was larger or smaller than the actual delay that DR1 experienced. This difference between the estimated and the actual export error is the relative export error (which could be either positive or negative); it occurs for every DR vector that is sent and is accumulated at the sender.

The concept of relative export error is illustrated in Figure 2. Since the actual delay to receiver 1 is da1, the export error induced by DR1 at receiver 1 is Err(DR1, T1, T1 + δ1 + da1). This means there is an error in the estimated export error, and the sender can compute this error only after it gets feedback from the receiver about the actual delay for the delivery of DR1, i.e., the value of da1. We propose that once receiver 1 receives DR1, it sends the value of da1 back to the sender. The receiver can compute this information, as it knows the time at which DR1 was sent (T^1_1 = T1 + δ1, which is appended to the DR vector, as shown in Figure 2) and the local receiving time (which is synchronized with the sender's clock). The sender then computes the relative export error for receiver 1, denoted R1, as

R1 = Err(DR1, T1, T1 + δ1 + dt1) − Err(DR1, T1, T1 + δ1 + da1)
   = Err(DR1, T1 + δ1 + da1, T1 + δ1 + dt1).

Similarly, the relative export error for receiver 2 is computed as

R2 = Err(DR1, T1, T1 + δ2 + dt2) − Err(DR1, T1, T1 + δ2 + da2)
   = −Err(DR1, T1 + δ2 + dt2, T1 + δ2 + da2).

Note that R1 > 0, since da1 < dt1, and R2 < 0, since da2 > dt2. Relative export errors are computed by the sender as and when it receives the feedback from the receivers. This example shows the relative export error values after DR1 is sent and the corresponding feedbacks are received.
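A sketch of this sender-side feedback bookkeeping: the relative export error is the estimated export error minus the actual one, i.e. the signed integral of dist(t) between the estimated and the actual arrival instants. The function and parameter names, and the sample dist function, are our assumptions.

```python
def relative_export_error(dist_fn, T, delta, dt_est, da_actual, steps=1000):
    """dist_fn(t): distance between exported and placed paths at time t."""
    t_est, t_act = T + delta + dt_est, T + delta + da_actual
    lo, hi = sorted((t_est, t_act))
    h = (hi - lo) / steps
    area = sum(dist_fn(lo + (i + 0.5) * h) for i in range(steps)) * h
    # Early arrival (da < dt): the sender over-estimated the error, so R > 0.
    return area if da_actual < dt_est else -area

# Receiver 1 of Figure 2: arrived 0.1 time units earlier than estimated.
print(relative_export_error(lambda t: 1.0 + 0.2 * t,
                            T=5.0, delta=0.05, dt_est=0.3, da_actual=0.2))
```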
4.2 Equalization of Error Among Receivers
We now explain what we mean by making the errors equal at all the receivers and how this can be achieved. As stated before, the sender keeps estimates of the delays to the receivers, dt1 and dt2 in the example of Figure 2. Thus, at time T1, when DR1 is computed, the sender already knows how long it may take messages carrying this DR vector to reach the receivers. The sender uses this information to compute the export errors, which are Err(DR1, T1, T1 + δ1 + dt1) and Err(DR1, T1, T1 + δ2 + dt2) for receivers 1 and 2, respectively. Note that the areas of these error components are a function of δ1 and δ2 as well as of the network delays dt1 and dt2. If we are to make the export errors due to DR1 the same at both receivers, the sender needs to choose δ1 and δ2 such that

Err(DR1, T1, T1 + δ1 + dt1) = Err(DR1, T1, T1 + δ2 + dt2).

But when DR1 was computed, there could already have been accumulated relative export errors due to previous DR vectors (DR0 and the ones before). Let us represent the accumulated relative error up to DRi for receiver j as R^i_j. To accommodate these accumulated relative errors, the sender should now choose δ1 and δ2 such that

R^0_1 + Err(DR1, T1, T1 + δ1 + dt1) = R^0_2 + Err(DR1, T1, T1 + δ2 + dt2).

The δi determine the scheduling instants of the DR vector at the sender for the receivers. This method of computing the δ's ensures that the accumulated export error (i.e., the total actual error) for each receiver is equalized at the transmission of each DR vector.

To establish this, assume that the feedback for DR vector DRi from a receiver reaches the sender before the schedule for DRi+1 is computed. Let S^i_m and A^i_m denote, respectively, the estimated error for receiver m used for computing the schedule for DRi, and the accumulated error for receiver m computed after receiving the feedback for DRi. Then R^i_m = A^i_m − S^i_m. To compute the scheduling instants (i.e., the δ's) for DRi, for any pair of receivers m and n, we set R^{i−1}_m + S^i_m = R^{i−1}_n + S^i_n. The following theorem establishes that the accumulated export error is equalized at every scheduling instant.

THEOREM 4.1. When the scheduling instants for sending DRi are computed for any pair of receivers m and n, the following condition is satisfied:

Σ_{k=1}^{i−1} A^k_m + S^i_m = Σ_{k=1}^{i−1} A^k_n + S^i_n.

Proof: By induction. Assume that the premise holds for some i; we show that it holds for i + 1. The base case for i = 1 holds, since initially R^0_m = R^0_n = 0 and the scheduling instants are chosen so that S^1_m = S^1_n.

To compute the schedule for DRi+1, we first compute the relative errors as

R^i_m = A^i_m − S^i_m  and  R^i_n = A^i_n − S^i_n.

Then, to compute the δ's, we set

R^i_m + S^{i+1}_m = R^i_n + S^{i+1}_n,  i.e.,  A^i_m − S^i_m + S^{i+1}_m = A^i_n − S^i_n + S^{i+1}_n.

Adding the condition of the premise on both sides, we get

Σ_{k=1}^{i} A^k_m + S^{i+1}_m = Σ_{k=1}^{i} A^k_n + S^{i+1}_n.

4.3 Computation of the Export Error
Let us now consider how the export errors can be computed. From the previous section, to find δ1 and δ2 we need to find Err(DR1, T1, T1 + δ1 + dt1) and Err(DR1, T1, T1 + δ2 + dt2). Note that the values of R^0_1 and R^0_2 are already known at the sender. Consider the computation of Err(DR1, T1, T1 + δ1 + dt1). This is the integral of the distance between the trajectories AC due to DR1 and BD due to DR0. From DR0 and DR1, point A is (X1, Y1) = (x1, y1) and point B is (X0, Y0) = (x0 + (T1 − T0) × vx0, y0 + (T1 − T0) × vy0).
The trajectory AC can be represented as a function of time as (X1(t), Y1(t)) = (X1 + vx1 × t, Y1 + vy1 × t), and the trajectory BD can be represented as (X0(t), Y0(t)) = (X0 + vx0 × t, Y0 + vy0 × t).

The distance between the two trajectories as a function of time then becomes

dist(t) = sqrt((X1(t) − X0(t))² + (Y1(t) − Y0(t))²)
        = sqrt(((X1 − X0) + (vx1 − vx0)t)² + ((Y1 − Y0) + (vy1 − vy0)t)²)
        = sqrt(((vx1 − vx0)² + (vy1 − vy0)²)t² + 2((X1 − X0)(vx1 − vx0) + (Y1 − Y0)(vy1 − vy0))t + (X1 − X0)² + (Y1 − Y0)²).

Let

a = (vx1 − vx0)² + (vy1 − vy0)²
b = 2((X1 − X0)(vx1 − vx0) + (Y1 − Y0)(vy1 − vy0))
c = (X1 − X0)² + (Y1 − Y0)².

Then dist(t) can be written as dist(t) = sqrt(a·t² + b·t + c), and Err(DR1, t1, t2) for some time interval [t1, t2] becomes

∫_{t1}^{t2} dist(t) dt = ∫_{t1}^{t2} sqrt(a·t² + b·t + c) dt.

A closed-form solution for the indefinite integral is

∫ sqrt(a·t² + b·t + c) dt = (2at + b)·sqrt(at² + bt + c)/(4a) + ((4ac − b²)/(8a^(3/2))) · ln((at + b/2)/√a + sqrt(at² + bt + c)) + C.

Err(DR1, T1, T1 + δ1 + dt1) and Err(DR1, T1, T1 + δ2 + dt2) can then be calculated by applying the appropriate limits to the above solution. In the next section, we consider the computation of the δ's for N receivers.
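A sketch evaluating the closed form above and cross-checking it against numeric integration. It assumes a > 0 (the velocities of the two DR vectors differ); the coefficient values are made up for illustration.

```python
import math

def antiderivative(a, b, c, t):
    q = math.sqrt(a * t * t + b * t + c)
    # The ln argument is positive whenever dist(t)^2 has no real root,
    # i.e. the trajectories never actually touch.
    return ((2 * a * t + b) * q / (4 * a)
            + (4 * a * c - b * b) / (8 * a ** 1.5)
            * math.log((a * t + b / 2) / math.sqrt(a) + q))

def err_closed_form(a, b, c, t1, t2):
    """Err over [t1, t2] for dist(t) = sqrt(a t^2 + b t + c)."""
    return antiderivative(a, b, c, t2) - antiderivative(a, b, c, t1)

def err_numeric(a, b, c, t1, t2, steps=100000):
    h = (t2 - t1) / steps
    f = lambda t: math.sqrt(a * t * t + b * t + c)
    return sum(f(t1 + (i + 0.5) * h) for i in range(steps)) * h

a, b, c = 0.04, 0.2, 1.0
print(err_closed_form(a, b, c, 0.0, 2.0), err_numeric(a, b, c, 0.0, 2.0))
```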
4.4 Computation of Scheduling Instants
We again look at the computation of the δ's by referring to Figure 2. The sender chooses δ1 and δ2 such that R^0_1 + Err(DR1, T1, T1 + δ1 + dt1) = R^0_2 + Err(DR1, T1, T1 + δ2 + dt2). If R^0_1 and R^0_2 are both zero, then δ1 and δ2 should be chosen such that Err(DR1, T1, T1 + δ1 + dt1) = Err(DR1, T1, T1 + δ2 + dt2). This equality holds if δ1 + dt1 = δ2 + dt2. Thus, if there is no accumulated relative export error, all the sender needs to do is choose the δ's in such a way that they counteract the difference in the delays to the two receivers, so that both receive the DR vector at the same time. As discussed earlier, because the sender is not able to learn the delay a priori, there will always be an accumulated relative export error from a previous DR vector that has to be taken into account.

To delve deeper into this, consider the computation of the export error as illustrated in the previous section. To compute the δ's we require that

R^0_1 + ∫_{T1}^{T1+δ1+dt1} dist(t) dt = R^0_2 + ∫_{T1}^{T1+δ2+dt2} dist(t) dt,

that is,

R^0_1 + ∫_{T1}^{T1+dt1} dist(t) dt + ∫_{T1+dt1}^{T1+dt1+δ1} dist(t) dt = R^0_2 + ∫_{T1}^{T1+dt2} dist(t) dt + ∫_{T1+dt2}^{T1+dt2+δ2} dist(t) dt.

The components R^0_1 and R^0_2 are already known to (or estimated by) the sender. Further, the error components ∫_{T1}^{T1+dt1} dist(t) dt and ∫_{T1}^{T1+dt2} dist(t) dt can be computed a priori by the sender using the estimated values of dt1 and dt2. Let E1 denote R^0_1 + ∫_{T1}^{T1+dt1} dist(t) dt and E2 denote R^0_2 + ∫_{T1}^{T1+dt2} dist(t) dt. Then we require that

E1 + ∫_{T1+dt1}^{T1+dt1+δ1} dist(t) dt = E2 + ∫_{T1+dt2}^{T1+dt2+δ2} dist(t) dt.

Assume that E1 > E2. Then, for the above equation to hold, we require that

∫_{T1+dt1}^{T1+dt1+δ1} dist(t) dt < ∫_{T1+dt2}^{T1+dt2+δ2} dist(t) dt.

To make the game as fast as possible within this framework, the δ values should be made as small as possible, so that DR vectors are sent to the receivers as soon as possible subject to the fairness requirement. Given this, we choose δ1 to be zero and compute δ2 from the equation

E1 = E2 + ∫_{T1+dt2}^{T1+dt2+δ2} dist(t) dt.

In general, if there are N receivers 1, ..., N, when a sender generates a DR vector and decides to schedule it to be sent, it first computes the Ei values for all of them from the accumulated relative export errors and the delay estimates. It then finds the largest of these values (equalizing toward a smaller value would require one of the remaining integrals to be negative, which is impossible). Let Ek be the largest value. The sender sets δk to zero and computes the rest of the δ's from the equality

E_i + ∫_{T1+dt_i}^{T1+dt_i+δ_i} dist(t) dt = E_k,  ∀i, 1 ≤ i ≤ N, i ≠ k.  (1)

The δ's thus obtained give the scheduling instants of the DR vector for the receivers.

4.5 Steps of the Scheduling Algorithm
For the purpose of the discussion below, as before, let us denote the accumulated relative export error at a sender for receiver k up until DRi as R^i_k, and the scheduled delay at the sender before DRi is sent to receiver k as δ^i_k. Given the above discussion, the algorithm steps are as follows:

1. The sender computes DRi at (say) time Ti and then computes δ^i_k and R^{i−1}_k, ∀k, 1 ≤ k ≤ N, based on the estimates of the delays dtk, ∀k, 1 ≤ k ≤ N, as per Equation (1) (a sketch of this computation follows this list). It schedules DRi to be sent to receiver k at time Ti + δ^i_k.

2. The DR vectors are sent to the receivers at the scheduled times and are received after delays of dak, ∀k, 1 ≤ k ≤ N, where dak may be smaller or larger than dtk. The receivers send the values of dak back to the sender (a receiver can compute this value based on the timestamps on the DR vector, as described earlier).

3. The sender computes R^i_k as described earlier and illustrated in Figure 2. The sender also recomputes (using an exponential averaging method similar to round-trip time estimation in TCP [10]) the estimate of the delay dtk from the new value of dak for receiver k.

4. The sender goes back to Step 1 to compute DRi+1 when it is required, and follows the steps of the algorithm to schedule and send this DR vector to the receivers.
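The sketch below shows one way to carry out the schedule computation of Equation (1): the receiver with the largest carried-over error E_k gets δ = 0, and each other δ_i is found by bisection on the integral, which grows monotonically in δ. The dist function, error values and delay estimates are made-up illustrations.

```python
import math

def integral(dist_fn, lo, hi, steps=200):
    h = (hi - lo) / steps
    return sum(dist_fn(lo + (i + 0.5) * h) for i in range(steps)) * h

def compute_deltas(E, dt, T1, dist_fn, delta_max=10.0, eps=1e-6):
    """E[i]: carried-over error for receiver i; dt[i]: delay estimate."""
    k = max(range(len(E)), key=lambda i: E[i])   # largest carried-over error
    deltas = [0.0] * len(E)
    for i in range(len(E)):
        if i == k:
            continue
        t0, gap = T1 + dt[i], E[k] - E[i]        # extra error i must absorb
        lo, hi = 0.0, delta_max
        while hi - lo > eps:                     # bisection on the integral
            mid = (lo + hi) / 2
            if integral(dist_fn, t0, t0 + mid) < gap:
                lo = mid
            else:
                hi = mid
        deltas[i] = (lo + hi) / 2
    return deltas

dist = lambda t: math.sqrt(0.04 * t * t + 0.2 * t + 1.0)
print(compute_deltas([3.0, 1.0, 2.0], [0.8, 0.5, 0.2], 0.0, dist))
```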
4.6 Handling Cases in Practice
So far we have implicitly assumed that DRi is sent out to all receivers before a decision is made to compute the next DR vector DRi+1, and that the receivers send the values of dak corresponding to DRi and this information reaches the sender before it computes DRi+1, so that it can compute R^i_k and then use it in the computation of δ^{i+1}_k. Two issues need consideration when the above algorithm is used in practice:

• It may happen that a new DR vector is computed even before the previous DR vector has been sent out to all receivers. How is this situation handled?
• What happens if the feedback does not arrive before DRi+1 is computed and scheduled to be sent?

Let us consider the first scenario. Assume that DRi has been scheduled to be sent and that the scheduling instants are such that δ^i_1 < δ^i_2 < ··· < δ^i_N. Assume further that DRi+1 is to be computed (because the real path has deviated by more than the threshold from the path exported by DRi) at time Ti+1, where Ti + δ^i_k < Ti+1 < Ti + δ^i_{k+1}. This means that DRi has been sent only to receivers up to k in the scheduled order. In our algorithm, in this case, the scheduled delay-ordering queue is flushed; that is, DRi is not sent to the receivers still queued to receive it, and a new scheduling order is computed for all the receivers to send DRi+1.

For those receivers to whom DRi was sent, assume for now that daj, 1 ≤ j ≤ k, has been received from all of them (the scenario where daj has not been received will be considered as part of the second scenario below). For these receivers, E^i_j, 1 ≤ j ≤ k, can be computed. For those receivers j, k + 1 ≤ j ≤ N, to whom DRi was not sent, E^i_j does not apply. Consider such a receiver j, and refer to Figure 3.

Figure 3: Schedule computation when DRi is not sent to receiver j, k + 1 ≤ j ≤ N.

For such a receiver j, when DRi+1 is to be scheduled and δ^{i+1}_j needs to be computed, the total export error is the accumulated relative export error at time Ti, when the schedule for DRi was computed, plus the integral of the distance between the two trajectories AC and BD of Figure 3 over the time interval [Ti, Ti+1 + δ^{i+1}_j + dtj]. Note that this integral is given by Err(DRi, Ti, Ti+1) + Err(DRi+1, Ti+1, Ti+1 + δ^{i+1}_j + dtj). Therefore, instead of E^i_j in Equation (1), we use the value R^{i−1}_j + Err(DRi, Ti, Ti+1) + Err(DRi+1, Ti+1, Ti+1 + δ^{i+1}_j + dtj), where R^{i−1}_j is the relative export error used when the schedule for DRi was computed.

Now consider the second scenario. Here, the feedback dak corresponding to DRi has not arrived before DRi+1 is computed and scheduled. In this case, R^i_k cannot be computed, and in the computation of δk for DRi+1 it is assumed to be zero. We do assume that a reliable mechanism is used to send dak back to the sender. When this information arrives at a later time, R^i_k will be computed and added to future relative export errors (for example, to R^{i+1}_k if dak is received before DRi+2 is computed) and used in the computation of δk when a future DR vector is to be scheduled (for example, DRi+2).

4.7 Experimental Results
In order to evaluate the effectiveness of, and quantify the benefits obtained through, the scheduling algorithm, we implemented the proposed algorithm in the BZFlag (Battle Zone Flag) game [11]. It is a first-person shooter game in which the players, in teams, drive tanks and move within a battle field. The aim of the players is to navigate, capture flags belonging to the other team, and bring them back to their own area. The players shoot each other's tanks using bullets. The movement of the tanks, as well as that of the shots, is exchanged among the players using DR vectors. We have modified the implementation of BZFlag to incorporate synchronized clocks among the players and the server, and to exchange timestamps with the DR vectors.
We set up a testbed with four players running the instrumented version of BZFlag, one acting as the sender and the rest as receivers. The scheduling approach and the base case, where each DR vector is sent to all the receivers concurrently at every trigger point, were implemented in the same run by tagging the DR vectors according to the approach used to send them. NISTNet [12] was used to introduce delays between the sender and the three receivers. Mean delays of 800 ms, 500 ms and 200 ms were introduced between the sender and the first, second and third receiver, respectively. We introduced a variance of 100 ms (around the mean delay of each receiver) to model delay variability. The sender logged the error of each receiver every 100 milliseconds for both the scheduling approach and the base case. The sender also calculated the standard deviation and the mean of the accumulated export errors of all the receivers every 100 milliseconds. Figure 4 plots the mean and standard deviation of the accumulated export error of all the receivers for the scheduling case against the base case. Note that the x-axis of these graphs (and of the other graphs that follow) represents the system time when the snapshot of the game was taken.

Observe that the standard deviation of the error with scheduling is much lower than in the base case. This implies that the accumulated errors of the receivers in the scheduling case are closer to one another, which shows that the scheduling approach achieves fairness among the receivers even when they are at different distances (i.e., latencies) from the sender.

Observe also that the mean of the accumulated error increased multifold with scheduling in comparison to the base case. Further exploration of the reason for this rise led to the following conclusion: every time the DR vectors are scheduled so as to equalize the total error, the schedule pushes each receiver's total error higher. Moreover, because the accumulated error has an estimated component, the schedule does not equalize the errors of the receivers exactly, and the DR vector reaches each receiver earlier or later than the schedule intended. In either case the error is not equalized, and if the DR vector reaches a receiver late, it actually increases that receiver's error beyond the highest accumulated error. This means that at the next trigger this receiver will be the one with the highest error, and every other receiver's error will be pushed up to this error value. This flip-flop effect leads to an increase in the accumulated error for all the receivers.

Scheduling for fairness thus decreases the standard deviation (i.e., increases the fairness among different players), but it comes at the cost of a higher mean error, which may not be a desirable feature. This led us to explore different ways of equalizing the accumulated errors. The approach discussed in the following section is a heuristic based on the following idea: using the same number of DR vectors over time as in the base case, instead of sending the DR vectors to all the receivers at the same frequency as in the base case, we increase the frequency of sending DR vectors to receivers with higher accumulated error and decrease the frequency of sending DR vectors to receivers with lower accumulated error, thereby equalizing the export error of all receivers over time.
At the same time, we wish to decrease the error of the receiver with the highest accumulated error in the base case (of course, this receiver will be sent more DR vectors than in the base case). We refer to such an algorithm as a budget based algorithm.

5. BUDGET BASED ALGORITHM
In a game, the sender of an entity sends DR vectors to all the receivers every time a threshold is crossed by the entity. The lower the threshold, the more DR vectors are generated during a given time period. Since the DR vectors are sent to all the receivers, and the network delay between the sender-receiver pairs cannot be avoided, the before export error with the most distant player will always be higher than the rest. (Recall that the after export error is eliminated by using synchronized clocks among the players.) In order to mitigate this imbalance in the error, we propose to send DR vectors selectively to different players based on the accumulated errors of these players. The budget based algorithm is based on this idea, and it has two variations: a probabilistic budget based scheme and a deterministic budget based scheme.

Figure 4: Mean and standard deviation of error with scheduling and without (i.e., base case).

5.1 Probabilistic budget based scheme
The probabilistic budget based scheme has three main steps: (a) lower the dead reckoning threshold, but at the same time keep the total number of DRs sent the same as in the base case; (b) at every trigger, probabilistically pick a player to send the DR vector to; and (c) send the DR vector to the chosen player. These steps are described below.

The lowering of the DR threshold is implemented as follows. Lowering the threshold is equivalent to increasing the number of trigger points at which DR vectors are generated. Suppose the threshold is such that the number of triggers caused by it in the base case is t, and at each trigger n DR vectors are sent by the sender, resulting in a total of nt DR vectors. Our goal is to keep the total number of DR vectors sent by the sender fixed at nt, but to lower the number of DR vectors sent at each trigger (i.e., not send the DR vector to all the receivers). Let n′ and t′ be the number of DR vectors sent at each trigger and the number of triggers, respectively, in the modified case. We want to ensure n′t′ = nt. Since we want to increase the number of trigger points, i.e., t′ > t, this means n′ < n. That is, not all receivers will be sent the DR vector at every trigger.

In the probabilistic budget based scheme, at each trigger a probability is calculated for each receiver to be sent a DR vector, and only one receiver is sent the DR vector (n′ = 1). This probability is based on the relative weights of the receivers' accumulated errors; that is, a receiver with a higher accumulated error has a higher probability of being sent the DR vector. Suppose the accumulated errors of three players are a1, a2 and a3, respectively. Then the probability of player 1 receiving the DR vector is a1/(a1 + a2 + a3), and similarly for the other players.
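A sketch of this probabilistic pick, with made-up error values: at each trigger, one receiver is chosen with probability proportional to its accumulated error.

```python
import random

def pick_receiver(acc_err):
    """Return the index of one receiver, weighted by accumulated error."""
    if sum(acc_err) == 0:
        return random.randrange(len(acc_err))   # no error yet: uniform pick
    return random.choices(range(len(acc_err)), weights=acc_err, k=1)[0]

# Receiver 0 (error 600) is picked about twice as often as receiver 2 (300).
picks = [pick_receiver([600.0, 450.0, 300.0]) for _ in range(10000)]
print([picks.count(i) / len(picks) for i in range(3)])  # ~ [0.44, 0.33, 0.22]
```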
To compare the probabilistic budget based algorithm with the base case, we needed to lower the threshold of the base case as well (for a fair comparison). As the dead reckoning threshold in the base case was already very fine, we decided that, instead of lowering the threshold, the probabilistic budget based approach would be compared against a modified base case that used the same threshold as the budget based algorithm but sent a DR vector to all three receivers used in our experiments only at every third trigger. We call this the 1/3 base case, as it results in one third the number of DR vectors being sent compared to the base case. The budget per trigger for the probability based approach was one DR vector at each trigger, compared to three DR vectors at every third trigger in the 1/3 base case; thus the two cases lead to the same number of DR vectors being sent out over time.

In order to evaluate the effectiveness of the probabilistic budget based algorithm, we instrumented the BZFlag game to use this approach. We used the same testbed consisting of one sender and three receivers with delays of 800 ms, 500 ms, and 200 ms from the sender, with low delay variance (100 ms) and moderate delay variance (180 ms). The results are shown in Figures 5 and 6. As mentioned earlier, the x-axis of these graphs represents the system time when the snapshot of the game was taken. Observe from the figures that the standard deviation of the accumulated error among the receivers with the probabilistic budget based algorithm is less than in the 1/3 base case, while the mean is a little higher. This implies that the game is fairer than the 1/3 base case at the cost of a small increase in mean error.

The increase in mean error in the probabilistic case compared to the 1/3 base case can be attributed to the fact that, even though the probabilistic approach on average sends the same number of DR vectors as the 1/3 base case, its probabilistic nature means it sometimes sends DR vectors to a receiver less frequently, and sometimes more frequently, than the 1/3 base case. When a receiver does not receive a DR vector for a long time, the receiver's trajectory drifts further and further off the sender's trajectory, and hence the rate of buildup of the error at the receiver is higher. When a receiver receives DR vectors more frequently, it builds up error at a lower rate, but there is no way of reversing the error that was built up while it did not receive a DR vector for a long time.
This leads the receivers to build up more error in the probabilistic case than in the 1/3 base case, where the receivers receive a DR vector almost periodically.

[Figure 5: Mean and standard deviation of error for different algorithms (including budget based algorithms) for low delay variance.]

[Figure 6: Mean and standard deviation of error for different algorithms (including budget based algorithms) for moderate delay variance.]

5.2 Deterministic budget based scheme
To bound the increase in mean error, we decided to modify the budget based algorithm to be deterministic. The first two steps of the algorithm are the same as in the probabilistic algorithm: the trigger points are increased to lower the threshold, and accumulated errors are used to compute the probability that a receiver will receive a DR vector. Once these steps are completed, a deterministic schedule for the receivers is computed as follows:

1. If any receiver(s) are tagged to receive a DR vector at the current trigger, the sender sends out the DR vector to the respective receiver(s). If at least one receiver was sent a DR vector, the sender calculates the probabilities of each receiver receiving a DR vector as explained before and follows steps 2 to 6; otherwise it does nothing.

2. For each receiver, the probability value is multiplied by the budget available at each trigger (which is set to 1, as explained below) to give the frequency of sending the DR vector to that receiver.

3. If any receiver's frequency, after multiplying by the budget, exceeds 1, that receiver's frequency is set to 1 and the surplus amount is distributed equally among all the receivers by adding it to their existing frequencies. This process is repeated until every receiver has a frequency of at most 1. The reason is that at a trigger we cannot send more than one DR vector to a given receiver; doing so would waste DR vectors by sending redundant information.

4. (1/frequency) gives the schedule at which the sender should send DR vectors to the respective receiver. Credit obtained previously (explained in step 5), if any, is subtracted from the schedule. Observe that the resulting value of the schedule might not be an integer; hence, the value is rounded up by taking its ceiling. For example, if the frequency is 1/3.5, we would like to have a DR vector sent every 3.5 triggers. However, we are constrained to send it at the 4th trigger, giving us a credit of 0.5. The next time we send a DR vector, we are able to send it on the 3rd trigger because of the 0.5 credit.
5. The difference between the schedule and the ceiling of the schedule is the credit that the receiver has obtained; it is remembered and applied the next time, as explained in step 4.

6. Each receiver that was sent a DR vector at the current trigger is tagged to receive the next DR vector at the trigger that is exactly the (ceiling of the) schedule number of triggers away from the current trigger. Observe that no other receiver's schedule is modified at this point, as they are all running schedules calculated at some previous point in time. Those schedules are automatically modified at the trigger when they are scheduled to receive their next DR vector. At the first trigger, the sender sends the DR vector to all the receivers, uses a relative probability of 1/n for each receiver, and follows steps 2 to 6 to calculate the next schedule for each receiver in the same way as for other triggers. This algorithm ensures that every receiver has a guaranteed schedule of receiving DR vectors, so there is none of the irregularity in sending DR vectors to a receiver that was observed in the probabilistic budget based algorithm.

We used the testbed described earlier (three receivers with varying delays) to evaluate the deterministic algorithm using a budget of 1 DR vector per trigger so as to use the same number of DR vectors as in the 1/3 base case. Results from our experiments are shown in Figures 5 and 6. It can be observed that the standard deviation of error in the deterministic budget based algorithm is lower than in the 1/3 base case, while the mean error is the same as in the 1/3 base case. This indicates that the deterministic algorithm is fairer than the 1/3 base case and at the same time does not increase the mean error, thereby leading to better game quality than the probabilistic algorithm.

In general, when comparing the deterministic approach to the probabilistic approach, we found that the mean accumulated error was always less in the deterministic approach. With respect to the standard deviation of the accumulated error, we found that in the fixed or low variance cases the deterministic approach was generally lower, but in higher variance cases it was harder to draw conclusions, as the probabilistic approach was sometimes better than the deterministic approach.
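The schedule-and-credit bookkeeping of steps 4 and 5 can be captured in a few lines. The sketch below assumes one such object per receiver; the names are illustrative, not from the instrumented game.

    final class ReceiverSchedule {
        private double credit = 0; // rounding surplus carried over (step 5)
        // frequency: DR vectors per trigger allotted to this receiver (<= 1)
        // returns how many triggers from now the next DR vector is sent (step 4)
        int nextSendIn(double frequency) {
            double schedule = 1.0 / frequency - credit;
            int triggers = (int) Math.ceil(schedule);
            credit = triggers - schedule; // e.g., 3.5 -> send at the 4th trigger, credit 0.5
            return triggers;
        }
    }

With a frequency of 1/3.5, successive calls return 4, 3, 4, 3, ..., matching the example in step 4.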
6. CONCLUSIONS AND FUTURE WORK
In distributed multi-player games played across the Internet, object and player trajectories within the game space are exchanged in the form of DR vectors. Due to the variable delay between players, these DR vectors reach different players at different times. Receivers that are closer to the sender of a DR vector gain an unfair advantage, as they are able to render the sender's position more accurately in real time. In this paper, we first developed a model for estimating the error in rendering player trajectories at the receivers. We then presented an algorithm based on scheduling the DR vectors to be sent to different players at different times, thereby equalizing the error at different players. This algorithm is aimed at making the game fair to all players, but it tends to increase the mean error of the players. To counter this effect, we presented budget based algorithms in which the DR vectors are still scheduled to be sent to different players at different times, but the algorithm balances the need for fairness with the requirement that the error of the worst case players (who are furthest from the sender) not be increased compared to the base case (where all DR vectors are sent to all players every time a DR vector is generated). We presented two variations of the budget based algorithms and showed through experimentation that the algorithms reduce the standard deviation of the error, thereby making the game fairer, while maintaining a mean error comparable to the base case.

7. REFERENCES
[1] S. Aggarwal, H. Banavar, A. Khandelwal, S. Mukherjee, and S. Rangarajan, Accuracy in Dead-Reckoning based Distributed Multi-Player Games, in Proc. of ACM SIGCOMM 2004 Workshop on Network and System Support for Games (NetGames 2004), Aug. 2004.
[2] L. Gautier and C. Diot, Design and Evaluation of MiMaze, a Multiplayer Game on the Internet, in Proc. of IEEE Multimedia (ICMCS'98), 1998.
[3] M. Mauve, Consistency in Replicated Continuous Interactive Media, in Proc. of the ACM Conference on Computer Supported Cooperative Work (CSCW'00), 2000, pp. 181-190.
[4] S.K. Singhal and D.R. Cheriton, Exploiting Position History for Efficient Remote Rendering in Networked Virtual Reality, Presence: Teleoperators and Virtual Environments, vol. 4, no. 2, pp. 169-193, 1995.
[5] C. Diot and L. Gautier, A Distributed Architecture for Multiplayer Interactive Applications on the Internet, IEEE Network Magazine, 1999, vol. 13, pp. 6-15.
[6] L. Pantel and L.C. Wolf, On the Impact of Delay on Real-Time Multiplayer Games, in Proc. of ACM NOSSDAV'02, May 2002.
[7] Y. Lin, K. Guo, and S. Paul, Sync-MS: Synchronized Messaging Service for Real-Time Multi-Player Distributed Games, in Proc. of 10th IEEE International Conference on Network Protocols (ICNP), Nov. 2002.
[8] K. Guo, S. Mukherjee, S. Rangarajan, and S. Paul, A Fair Message Exchange Framework for Distributed Multi-Player Games, in Proc. of NetGames 2003, May 2003.
[9] N. E. Baughman and B. N. Levine, Cheat-Proof Playout for Centralized and Distributed Online Games, in Proc. of IEEE INFOCOM'01, April 2001.
[10] M. Allman and V. Paxson, On Estimating End-to-End Network Path Properties, in Proc. of ACM SIGCOMM'99, Sept. 1999.
[11] BZFlag Forum, BZFlag Game, URL: http://www.bzflag.org.
[12] National Institute of Standards and Technology, NIST Net, URL: http://snad.ncsl.nist.gov/nistnet/.", "keywords": "fairness;dead-reckoning vector;export error;network delay;budget based algorithm;clock synchronization;distribute multi-player game;bucket synchronization;mean error;distributed multi-player game;quantization;dead-reckon;scheduling algorithm;accuracy"}
{"name": "train_C-53", "title": "Globally Synchronized Dead-Reckoning with Local Lag for Continuous Distributed Multiplayer Games", "abstract": "Dead-Reckoning (DR) is an effective method to maintain consistency for Continuous Distributed Multiplayer Games (CDMG). Since DR can filter most unnecessary state updates and improve the scalability of a system, it is widely used in commercial CDMG. However, DR cannot maintain high consistency, and this constrains its application in highly interactive games. With the help of global synchronization, DR can achieve higher consistency, but it still cannot eliminate before inconsistency.
In this paper, a method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and Globally Synchronized DR (GS-DR), is presented. Performance evaluation shows that GS-DR-LL can effectively decrease before inconsistency, and the effects increase with the lag.", "fulltext": "1. INTRODUCTION
Nowadays, many distributed multiplayer games adopt replicated architectures. In such games, the states of entities are changed not only by the operations of players, but also by the passing of time [1, 2]. These games are referred to as Continuous Distributed Multiplayer Games (CDMG). Like other distributed applications, CDMG suffer from the consistency problem caused by network transmission delay. Although new network techniques (e.g., QoS) can reduce or at least bound the delay, they cannot completely eliminate it, as the speed of light imposes a physical limit; for instance, 100 ms is needed for light to propagate from Europe to Australia [3]. There are many studies about the effects of network transmission delay in different applications [4, 5, 6, 7]. In replication based games, network transmission delay makes the states of local and remote sites inconsistent, which can cause serious problems, such as reducing the fairness of a game and leading to paradoxical situations. In order to maintain consistency for distributed systems, many different approaches have been proposed, among which local lag and Dead-Reckoning (DR) are two representative approaches.

Mauve et al. [1] proposed local lag to maintain high consistency for replicated continuous applications. It synchronizes the physical clocks of all sites in a system. After an operation is issued at the local site, it delays the execution of the operation for a short time. During this short time period the operation is transmitted to remote sites, and all sites try to execute the operation at the same physical time. In order to tackle the inconsistency caused by exceptional network transmission delay, a time warp based mechanism is proposed to repair the state. Local lag can achieve significantly high consistency, but it is based on operation transmission, which forwards every operation on a shared entity to remote sites. Since the operation transmission mechanism requires that all operations be transmitted in a reliable way, message filtering is difficult to deploy and the scalability of a system is limited.

DR is based on a state transmission mechanism. In addition to the high fidelity model that maintains the accurate states of its own entities, each site also has a DR model that estimates the states of all entities (including its own entities). After each update of its own entities, a site compares the accurate state with the estimated one. If the difference exceeds a pre-defined threshold, a state update is transmitted to all sites and all DR models are corrected. Through state estimation, DR can not only maintain consistency but also decrease the number of transmitted state updates. Compared with the aforementioned local lag, however, DR cannot maintain high consistency. Due to network transmission delay, when a remote site receives a state update of an entity, the state of the entity might already have changed at the site that sent the update.
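A minimal sketch of the DR send decision just described, for a one-dimensional state with linear extrapolation (names and types are illustrative):

    final class DeadReckoning {
        private final double threshold; // pre-defined DR threshold
        DeadReckoning(double threshold) { this.threshold = threshold; }
        // linear extrapolation used by the DR model
        double estimate(double lastPos, double velocity, double dt) {
            return lastPos + velocity * dt;
        }
        // after each update of its own entity, a site checks whether the
        // accurate state has drifted too far from the DR estimate
        boolean needsStateUpdate(double accurate, double estimated) {
            return Math.abs(accurate - estimated) > threshold;
        }
    }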
In order to make DR maintain high consistency, Aggarwal et al. [8] proposed Globally Synchronized DR (GS-DR), which synchronizes the physical clocks of all sites in a system and adds time stamps to transmitted state updates. A detailed description of GS-DR can be found in Section 3.

When a state update is available, GS-DR immediately updates the state of the local site and then transmits the state update to remote sites, which causes the states of the local site and remote sites to be inconsistent during the transmission procedure. Thus, with the synchronization of physical clocks, GS-DR can eliminate after inconsistency, but it cannot tackle before inconsistency [8]. In this paper, we propose a new method named Globally Synchronized DR with Local Lag (GS-DR-LL), which combines local lag and GS-DR. By delaying the update to the local site, GS-DR-LL can achieve higher consistency than GS-DR. The rest of this paper is organized as follows: Section 2 gives the definition of consistency and corresponding metrics; the cause of the inconsistency of DR is analyzed in Section 3; Section 4 describes how GS-DR-LL works; performance evaluation is presented in Section 5; Section 6 concludes the paper.

2. CONSISTENCY DEFINITIONS AND METRICS
The consistency of replicated applications has already been well defined in the discrete domain [9, 10, 11, 12], but little related work has been done in the continuous domain. Mauve et al. [1] have given a definition of consistency for replicated applications in the continuous domain, but the definition is based on operation transmission, and it is difficult for it to describe state transmission based methods (e.g., DR). Here, we present an alternative definition of consistency in the continuous domain, which suits state transmission based methods well.

Given two distinct sites i and j, which have replicated a shared entity e, at a given time t, the states of e at sites i and j are Si(t) and Sj(t).

DEFINITION 1: the states of e at sites i and j are consistent at time t, iff:
De(i, j, t) = |Si(t) - Sj(t)| = 0   (1)

DEFINITION 2: the states of e at sites i and j are consistent between time t1 and t2 (t1 < t2), iff:
De(i, j, t1, t2) = ∫ from t1 to t2 of |Si(t) - Sj(t)| dt = 0   (2)

In this paper, formulas (1) and (2) are used to determine whether the states of shared entities are consistent between local and remote sites. Due to network transmission delay, it is difficult to keep the states of shared entities absolutely consistent, so corresponding metrics are needed to measure the consistency of shared entities between local and remote sites.

De(i, j, t) can be used as a metric to measure the degree of consistency at a certain time point. If De(i, j, t1) > De(i, j, t2), it can be stated that between sites i and j, the consistency of the states of entity e at time point t1 is lower than that at time point t2. If De(i, j, t) > De(l, k, t), it can be stated that, at time point t, the consistency of the states of entity e between sites i and j is lower than that between sites l and k.

Similarly, De(i, j, t1, t2) can be used as a metric to measure the degree of consistency in a certain time period. If De(i, j, t1, t2) > De(i, j, t3, t4) and |t1 - t2| = |t3 - t4|, it can be stated that between sites i and j, the consistency of the states of entity e between time points t1 and t2 is lower than that between time points t3 and t4. If De(i, j, t1, t2) > De(l, k, t1, t2), it can be stated that between time points t1 and t2, the consistency of the states of entity e between sites i and j is lower than that between sites l and k.

In DR, the states of entities are composed of the positions and orientations of entities as well as some prediction related parameters (e.g., the velocities of entities). Given two distinct sites i and j, which have replicated a shared entity e, at a given time point t, the positions of e at sites i and j are (xit, yit, zit) and (xjt, yjt, zjt), and De(i, j, t) and De(i, j, t1, t2) can be calculated as:

De(i, j, t) = sqrt((xit - xjt)^2 + (yit - yjt)^2 + (zit - zjt)^2)   (3)

De(i, j, t1, t2) = ∫ from t1 to t2 of sqrt((xit - xjt)^2 + (yit - yjt)^2 + (zit - zjt)^2) dt   (4)

In this paper, formulas (3) and (4) are used as metrics to measure the consistency of shared entities between local and remote sites.
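A small sketch of how metrics (3) and (4) could be computed from sampled positions, approximating the integral in (4) by a Riemann sum over the sampling interval (names are illustrative):

    final class ConsistencyMetrics {
        // instantaneous inconsistency De(i, j, t), formula (3)
        static double instant(double[] pi, double[] pj) {
            double dx = pi[0] - pj[0], dy = pi[1] - pj[1], dz = pi[2] - pj[2];
            return Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
        // accumulated inconsistency De(i, j, t1, t2), formula (4),
        // approximated from positions sampled every dt time units
        static double accumulated(double[][] pathI, double[][] pathJ, double dt) {
            double sum = 0;
            for (int k = 0; k < pathI.length; k++)
                sum += instant(pathI[k], pathJ[k]) * dt;
            return sum;
        }
    }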
3. INCONSISTENCY IN DR
The inconsistency in DR can be divided into two parts by the time point when a remote site receives a state update. The inconsistency before a remote site receives a state update is referred to as before inconsistency, and the inconsistency after a remote site receives a state update is referred to as after inconsistency. Before inconsistency and after inconsistency are similar to the terms before export error and after export error [8].

After inconsistency is caused by the lack of synchronization between the physical clocks of all sites in a system. By employing physical clock synchronization, GS-DR can accurately calculate the states of shared entities after receiving state updates, and it can eliminate after inconsistency. Before inconsistency is caused by two factors. The first is the delay in sending state updates, as the local site does not send a state update unless the difference between the accurate state and the estimated one is larger than a predefined threshold. The second is network transmission delay, as a shared entity can be synchronized only after the remote sites receive the corresponding state update.

[Figure 1. The paths of a shared entity by using GS-DR.]

For example, assume that the velocity of a shared entity is the only parameter used to predict the entity's position, and that the current position of the entity can be calculated from its last position and current velocity. To simplify the description, it is also assumed that there are only two sites i and j in a game session, that site i acts as the local site and site j acts as the remote site, and that t1 is the time point at which the local site updates the state of the shared entity. Figure 1 illustrates the paths of the shared entity at the local site and the remote site along the x axis when using GS-DR. At the beginning, the positions of the shared entity are the same at sites i and j, and the velocity of the shared entity is 0. Before time point t0, the paths of the shared entity at sites i and j along the x coordinate are exactly the same. At time point t0, the player at site i issues an operation, which changes the velocity along the x axis to v0. Site i then periodically checks whether the difference between the accurate position of the shared entity and the estimated one (0 in this case) is larger than a predefined threshold.
At time point t1, site i finds that the difference is larger than the threshold and sends a state update to site j. The state update contains the position and velocity of the shared entity at time point t1, and t1 is also attached as a timestamp. At time point t2, the state update reaches site j, and the received state and the time deviation between t1 and t2 are used to calculate the current position of the shared entity. Site j then updates its replicated entity's position and velocity, and the paths of the shared entity at sites i and j overlap again.

From Figure 1, it can be seen that the after inconsistency is 0 and the before inconsistency is composed of two parts, D1 and D2. D1 is De(i, j, t0, t1), caused by the state filtering mechanism of DR. D2 is De(i, j, t1, t2), caused by network transmission delay.

4. GLOBALLY SYNCHRONIZED DR WITH LOCAL LAG
From the analysis in Section 3, it can be seen that GS-DR can eliminate after inconsistency, but it cannot effectively tackle before inconsistency. In order to decrease before inconsistency, we propose GS-DR-LL, which combines GS-DR with local lag and can effectively decrease before inconsistency.

In GS-DR-LL, the state of a shared entity at a certain time point t is notated as S = (t, pos, par 1, par 2, ..., par n), in which pos is the position of the entity and par 1 to par n are the parameters used to calculate the position of the entity. In order to simplify the description of GS-DR-LL, it is assumed that there are only one shared entity and one remote site.

At the beginning of a game session, the states of the shared entity are the same at the local and remote sites, with the same position p0 and parameters pars0 (pars represents all the parameters). The local site keeps three states: the real state of the entity Sreal, the predicted state at the remote site Sp-remote, and the latest state updated to the remote site Slate. The remote site keeps only one state Sremote, which is the real state of the entity at the remote site. Therefore, at the beginning of a game session Sreal = Sp-remote = Slate = Sremote = (t0, p0, pars0). In GS-DR-LL, it is assumed that the physical clocks of all sites are synchronized with a deviation of less than 50 ms (using NTP or a GPS clock). Furthermore, it is necessary to make corrections to a physical clock in a way that does not decrease the value of the clock, for example by slowing down or halting the clock for a period of time. Additionally, it is assumed that the game scene is updated at a fixed frequency, with T standing for the time interval between two consecutive updates; for example, if the scene update frequency is 50 Hz, T is 20 ms. n stands for the lag value used by local lag, and t stands for the current physical time.

After updating the scene, the local site waits for a constant amount of time T. During this time period, the local site receives the operations of the player and stores them in a list L, in which all operations are sorted by their issue time. At the end of time period T, the local site executes all stored operations whose issue time is between t - T and t on Slate to get the new Slate, and it also executes all stored operations whose issue time is between t - (n + T) and t - n on Sreal to get the new Sreal.
Additionally, the local site uses Sp-remote and the corresponding prediction methods to estimate the new Sp-remote. After the new Slate, Sreal, and Sp-remote are calculated, the local site checks whether the difference between the new Slate and Sp-remote exceeds the predefined threshold. If YES, the local site sends the new Slate to the remote site, and Sp-remote is updated with the new Slate. Note that the timestamp of the sent state update is t. After that, the local site uses Sreal to update the local scene and deletes the operations whose issue time is less than t - n from L.

After updating the scene, the remote site waits for a constant amount of time T. During this time period, the remote site stores received state update(s) in a list R, in which all state updates are sorted by their timestamps. At the end of time period T, the remote site checks whether R contains state updates whose timestamps are less than t - n. Note that t is the current physical time, and it increases during the transmission of state updates. If YES, it uses these state updates and the corresponding prediction methods to calculate the new Sremote; otherwise it uses Sremote and the corresponding prediction methods to estimate the new Sremote. After that, the remote site uses Sremote to update its local scene and deletes the state updates whose timestamps are less than t - n from R.
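The local-site cycle just described can be sketched as follows for a one-dimensional state with velocity-based prediction. All names are illustrative and are not taken from the paper's implementation.

    import java.util.ArrayList;
    import java.util.List;

    final class LocalSite {
        static final class Op { double issueTime; double newVelocity; }
        static final class State { double pos, vel; }

        State slate = new State(), sreal = new State(), spRemote = new State();
        final List<Op> pending = new ArrayList<>(); // list L, sorted by issue time
        double threshold, n /* local lag */, T /* scene update interval */;

        void cycle(double t) { // runs once every T
            for (Op op : pending) {
                if (op.issueTime > t - T && op.issueTime <= t) slate.vel = op.newVelocity;
                if (op.issueTime > t - (n + T) && op.issueTime <= t - n) sreal.vel = op.newVelocity;
            }
            slate.pos += slate.vel * T;
            sreal.pos += sreal.vel * T;
            spRemote.pos += spRemote.vel * T; // DR prediction of the remote view
            if (Math.abs(slate.pos - spRemote.pos) > threshold) {
                sendStateUpdate(t, slate);    // state update timestamped with t
                spRemote.pos = slate.pos;
                spRemote.vel = slate.vel;
            }
            render(sreal);                    // the rendered scene lags by n
            pending.removeIf(op -> op.issueTime < t - n);
        }
        void sendStateUpdate(double timestamp, State s) { /* network send */ }
        void render(State s) { /* draw the scene */ }
    }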
From the above description, it can be seen that the main difference between GS-DR and GS-DR-LL is that GS-DR-LL uses the operations whose issue time is less than t - n to calculate Sreal. This means that the scene seen by the local player reflects the operations issued a period of time (i.e., n) ago. Meanwhile, if the results of issued operations make the difference between Slate and Sp-remote exceed the predefined threshold, corresponding state updates are sent to the remote sites immediately.

The above is the basic mechanism of GS-DR-LL. In the case of multiple shared entities and remote sites, the local site calculates Slate, Sreal, and Sp-remote for each shared entity separately; if multiple Slate states need to be transmitted, the local site packs them into one state update and sends it to all remote sites.

Figure 2 illustrates the paths of a shared entity at the local site and the remote site when using GS-DR and GS-DR-LL. All conditions are the same as in the aforementioned example describing GS-DR. Compared with t1, t2, and n, T (i.e., the time interval between two consecutive updates) is quite small, and it is ignored in the following description.

At time point t0, the player at site i issues an operation, which changes the velocity of the shared entity from 0 to v0. With GS-DR-LL, the results of the operation are applied to the local scene at time point t0 + n. However, the operation is immediately used to calculate Slate; thus, under both GS-DR and GS-DR-LL, at time point t1 site i finds that the difference between the accurate position and the estimated one is larger than the threshold, and it sends a state update to site j. At time point t2, the state update is received by remote site j. Assuming that the timestamp of the state update is less than t - n, site j uses it to update the local scene immediately.

With GS-DR, the time period of before inconsistency is (t2 - t1) + (t1 - t0), whereas it decreases to (t2 - t1 - n) + (t1 - t0) with the help of GS-DR-LL. Note that t2 - t1 is caused by network transmission delay and t1 - t0 is caused by the state filtering mechanism of DR. If n is larger than t2 - t1, GS-DR-LL can eliminate the before inconsistency caused by network transmission delay, but it cannot eliminate the before inconsistency caused by the state filtering mechanism of DR (unless the threshold is set to 0). In highly interactive games, which require high consistency and in which GS-DR-LL might be employed, the results of operations are quite difficult to estimate and a small threshold must be used. Thus, in practice, most before inconsistency is caused by network transmission delay, and GS-DR-LL has the capability to eliminate such before inconsistency.

[Figure 2. The paths of a shared entity by using GS-DR and GS-DR-LL.]

For GS-DR-LL, the selection of the lag value n is very important, and both network transmission delay and the effects of local lag on interaction should be considered. According to the results of HCI related research, humans cannot perceive the delay imposed on a system when it is smaller than a specific value, and that value depends on both the system and the task. For example, in a graphical user interface a delay of approximately 150 ms cannot be noticed for keyboard interaction and the threshold increases to 195 ms for mouse interaction [13], while a delay of up to 50 ms is uncritical for a car-racing game [5]. Thus, if the network transmission delay is less than the specific value for a game system, n can be set to that value. Otherwise, n can be set in terms of the effects of local lag on the interaction of the system [14]. In case a large n must be used, some HCI methods (e.g., echo [15]) can be used to relieve the negative effects of the large lag. When n is larger than the network transmission delay, GS-DR-LL can eliminate most before inconsistency. Traditional local lag requires that the lag value be larger than the typical network transmission delay; otherwise state repairs would flood the system. GS-DR-LL, however, allows n to be smaller than the typical network transmission delay. In this case, the before inconsistency caused by network transmission delay still exists, but it can be decreased.

5. PERFORMANCE EVALUATION
In order to evaluate GS-DR-LL and compare it with GS-DR in a real application, we implemented both methods in a networked game named spaceship [1]. Spaceship is a very simple networked computer game, in which players control their spaceships to accelerate, decelerate, turn, and shoot spaceships controlled by remote players with laser beams. If a spaceship is hit by a laser beam, its life points decrease by one. If the life points of a spaceship decrease to 0, the spaceship is removed from the game and the player controlling it loses the game.

In our practical implementation, GS-DR-LL and GS-DR coexisted in the game system, and the test bed was composed of two computers connected by 100 Mbps switched Ethernet, with one computer acting as the local site and the other as the remote site. In order to simulate network transmission delay, a specific module was developed to delay all packets transmitted between the two computers according to a predefined delay value.

The main purpose of the performance evaluation is to study the effects of GS-DR-LL on decreasing before inconsistency in a particular game system under different thresholds, lags, and network transmission delays.
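The delaying module mentioned above can be approximated with a simple delay queue. The sketch below is an assumption about its structure, not the actual test-bed code.

    import java.util.concurrent.DelayQueue;
    import java.util.concurrent.Delayed;
    import java.util.concurrent.TimeUnit;

    final class DelayedPacket implements Delayed {
        final byte[] data;
        private final long releaseAtMs;
        DelayedPacket(byte[] data, long delayMs) {
            this.data = data;
            this.releaseAtMs = System.currentTimeMillis() + delayMs;
        }
        public long getDelay(TimeUnit unit) {
            return unit.convert(releaseAtMs - System.currentTimeMillis(),
                                TimeUnit.MILLISECONDS);
        }
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                other.getDelay(TimeUnit.MILLISECONDS));
        }
    }
    // Forwarding loop: take() blocks until a packet's delay has expired.
    //   DelayQueue<DelayedPacket> queue = new DelayQueue<>();
    //   while (true) { forward(queue.take().data); }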
Two different thresholds were used in the evaluation: one was 10 pixels deviation in position or 15 degrees deviation in orientation, and the other was 4 pixels or 5 degrees. Six different combinations of lag and network transmission delay were used, divided into two categories. In one category, the lag was fixed at 300 ms and three different network transmission delays (100 ms, 300 ms, and 500 ms) were used. In the other category, the network transmission delay was fixed at 800 ms and three different lags (100 ms, 300 ms, and 500 ms) were used. Therefore the total number of settings used in the evaluation was 12 (2 × 6).

The procedure of the performance evaluation was composed of three steps. In the first step, two participants were employed to play the game, and the operation sequences were recorded. Based on the records, a sub operation sequence, which lasted about one minute and included different operations (e.g., accelerate, decelerate, and turn), was selected. In the second step, the physical clocks of the two computers were synchronized first. Under different settings and consistency maintenance approaches, the selected sub operation sequence was played back on one computer, and it drove the two spaceships, one local and the other remote, to move. Meanwhile, the tracks of the spaceships on the two computers were recorded separately; together they are called a track couple. Since there are 12 settings and 2 consistency maintenance approaches, the total number of recorded track couples was 24. In the last step, the inconsistency within each track couple was calculated, with the pixel as the unit of inconsistency. Since the physical clocks of the two computers were synchronized, the calculation of inconsistency was quite simple: the inconsistency at a particular time point was the distance between the positions of the two spaceships at that time point (i.e., formula (3)).

In order to show the results of inconsistency in a clear way, only parts of the results, lasting about 7 seconds, are used in the following figures, and the figures show almost the same parts of the results. Figures 3, 4, and 5 show the results of inconsistency when the lag is fixed at 300 ms and the network transmission delays are 100, 300, and 500 ms. It can be seen that inconsistency does exist, but most of the time it is 0. Additionally, inconsistency increases with the network transmission delay, but decreases with the threshold. Compared with GS-DR, GS-DR-LL decreases inconsistency further, and it eliminates most inconsistency when the network transmission delay is 100 ms and the threshold is 4 pixels or 5 degrees.

According to the prediction and state filtering mechanisms of DR, inconsistency cannot be completely eliminated if the threshold is not 0. With the definitions of before inconsistency and after inconsistency, it can be concluded that GS-DR and GS-DR-LL both can eliminate after inconsistency, and that GS-DR-LL can effectively decrease before inconsistency.
It can be foreseen that with a proper lag and threshold (e.g., a lag larger than the network transmission delay and a threshold of 0), GS-DR-LL can even eliminate before inconsistency.

[Figure 3. Inconsistency when the network transmission delay is 100 ms and the lag is 300 ms (left: threshold of 10 pixels or 15 degrees; right: threshold of 4 pixels or 5 degrees).]

[Figure 4. Inconsistency when the network transmission delay is 300 ms and the lag is 300 ms (left: threshold of 10 pixels or 15 degrees; right: threshold of 4 pixels or 5 degrees).]

[Figure 5. Inconsistency when the network transmission delay is 500 ms and the lag is 300 ms (left: threshold of 10 pixels or 15 degrees; right: threshold of 4 pixels or 5 degrees).]

Figures 6, 7, and 8 show the results of inconsistency when the network transmission delay is fixed at 800 ms and the lags are 100, 300, and 500 ms. It can be seen that with GS-DR-LL before inconsistency decreases with the lag. In traditional local lag, the lag must be set to a value larger than the typical network transmission delay; otherwise the state repairs would flood the system. From the above results it can be seen that no such constraint exists on the selection of the lag: with GS-DR-LL, a system works fine even if the lag is much smaller than the network transmission delay.

From all the above results, it can be concluded that GS-DR and GS-DR-LL both can eliminate after inconsistency, that GS-DR-LL can effectively decrease before inconsistency, and that the effects increase with the lag.

[Figure 6. Inconsistency when the network transmission delay is 800 ms and the lag is 100 ms (left: threshold of 10 pixels or 15 degrees; right: threshold of 4 pixels or 5 degrees).]

[Figure 7. Inconsistency when the network transmission delay is 800 ms and the lag is 300 ms (left: threshold of 10 pixels or 15 degrees; right: threshold of 4 pixels or 5 degrees).]

[Figure 8. Inconsistency when the network transmission delay is 800 ms and the lag is 500 ms (left: threshold of 10 pixels or 15 degrees; right: threshold of 4 pixels or 5 degrees).]

6. CONCLUSIONS
Compared with traditional DR, GS-DR can eliminate after inconsistency through the synchronization of physical clocks, but it cannot tackle before inconsistency, which significantly influences the usability and fairness of a game.
In this paper, we proposed a method named GS-DR-LL, which combines local lag and GS-DR, to decrease before inconsistency by delaying the application of local operation results to the local scene. Performance evaluation indicates that GS-DR-LL can effectively decrease before inconsistency, and that the effects increase with the lag.

GS-DR-LL has significant implications for consistency maintenance approaches. First, GS-DR-LL shows that improved DR can not only eliminate after inconsistency but also decrease before inconsistency; with a proper lag and threshold, it can even eliminate before inconsistency. As a result, the applicability of DR can be greatly broadened, and it can be used in systems that require high consistency (e.g., highly interactive games). Second, GS-DR-LL shows that by combining local lag and GS-DR, the constraint on selecting the lag value is removed, and a lag smaller than the typical network transmission delay can be used. As a result, the applicability of local lag can be greatly broadened, and it can be used in systems with a large typical network transmission delay (e.g., Internet based games).

7. REFERENCES
[1] Mauve, M., Vogel, J., Hilt, V., and Effelsberg, W. Local-Lag and Timewarp: Providing Consistency for Replicated Continuous Applications. IEEE Transactions on Multimedia, Vol. 6, No. 1, 2004, 47-57.
[2] Li, F.W., Li, L.W., and Lau, R.W. Supporting Continuous Consistency in Multiplayer Online Games. In Proc. of ACM Multimedia, 2004, 388-391.
[3] Pantel, L. and Wolf, L. On the Suitability of Dead Reckoning Schemes for Games. In Proc. of NetGames, 2002, 79-84.
[4] Alhalabi, M.O., Horiguchi, S., and Kunifuji, S. An Experimental Study on the Effects of Network Delay in Cooperative Shared Haptic Virtual Environment. Computers and Graphics, Vol. 27, No. 2, 2003, 205-213.
[5] Pantel, L. and Wolf, L.C. On the Impact of Delay on Real-Time Multiplayer Games. In Proc. of NOSSDAV, 2002, 23-29.
[6] Meehan, M., Razzaque, S., Whitton, M.C., and Brooks, F.P. Effect of Latency on Presence in Stressful Virtual Environments. In Proc. of IEEE VR, 2003, 141-148.
[7] Bernier, Y.W. Latency Compensation Methods in Client/Server In-Game Protocol Design and Optimization. In Proc. of Game Developers Conference, 2001.
[8] Aggarwal, S., Banavar, H., and Khandelwal, A. Accuracy in Dead-Reckoning based Distributed Multi-Player Games. In Proc. of NetGames, 2004, 161-165.
[9] Raynal, M. and Schiper, A. From Causal Consistency to Sequential Consistency in Shared Memory Systems. In Proc. of Conference on Foundations of Software Technology and Theoretical Computer Science, 1995, 180-194.
[10] Ahamad, M., Burns, J.E., Hutto, P.W., and Neiger, G. Causal Memory. In Proc. of International Workshop on Distributed Algorithms, 1991, 9-30.
[11] Herlihy, M. and Wing, J. Linearizability: a Correctness Condition for Concurrent Objects. ACM Transactions on Programming Languages and Systems, Vol. 12, No. 3, 1990, 463-492.
[12] Misra, J. Axioms for Memory Access in Asynchronous Hardware Systems. ACM Transactions on Programming Languages and Systems, Vol. 8, No. 1, 1986, 142-153.
[13] Dabrowski, J.R. and Munson, E.V. Is 100 Milliseconds too Fast. In Proc. of SIGCHI Conference on Human Factors in Computing Systems, 2001, 317-318.
[14] Chen, H., Chen, L., and Chen, G.C. Effects of Local-Lag Mechanism on Cooperation Performance in a Desktop CVE System.
Journal of Computer Science and Technology, Vol. 20, No. 3, 2005, 396-401.
[15] Chen, L., Chen, H., and Chen, G.C. Echo: a Method to Improve the Interaction Quality of CVEs. In Proc. of IEEE VR, 2005, 269-270.", "keywords": "local lag;physical clock;time warp;usability and fairness;continuous replicate application;network transmission delay;distribute multi-player game;accurate state;gs-dr-ll;dead-reckon;multiplayer game;consistency;correction"}
{"name": "train_C-54", "title": "Remote Access to Large Spatial Databases", "abstract": "Enterprises in the public and private sectors have been making their large spatial data archives available over the Internet. However, interactive work with such large volumes of online spatial data is a challenging task. We propose two efficient approaches to remote access to large spatial data. First, we introduce a client-server architecture where the work is distributed between the server and the individual clients for spatial query evaluation, data visualization, and data management. We enable the minimization of the requirements for system resources on the client side while maximizing system responsiveness as well as the number of connections one server can handle concurrently. Second, for prolonged periods of access to large online data, we introduce APPOINT (an Approach for Peer-to-Peer Offloading the INTernet). This is a centralized peer-to-peer approach that helps Internet users transfer large volumes of online data efficiently. In APPOINT, active clients of the client-server architecture act on the server's behalf and communicate with each other to decrease network latency, improve service bandwidth, and resolve server congestions.", "fulltext": "1. INTRODUCTION
In recent years, enterprises in the public and private sectors have provided access to large volumes of spatial data over the Internet. Interactive work with such large volumes of online spatial data is a challenging task. We have been developing an interactive browser for accessing spatial online databases: the SAND (Spatial and Non-spatial Data) Internet Browser. Users of this browser can interactively and visually manipulate spatial data remotely. Unfortunately, interactive remote access to spatial data slows to a crawl without proper data access mechanisms. We developed two separate methods for improving system performance that, together, form a dynamic network infrastructure that is highly scalable and provides a satisfactory user experience for interactions with large volumes of online spatial data.

The core functionality responsible for the actual database operations is performed by the server-based SAND system. SAND is a spatial database system developed at the University of Maryland [12]. The client-side SAND Internet Browser provides a graphical user interface to the facilities of SAND over the Internet. Users specify queries by choosing the desired selection conditions from a variety of menus and dialog boxes.

The SAND Internet Browser is Java-based, which makes it deployable across many platforms. In addition, since Java has often been installed on target computers beforehand, our clients can be deployed on these systems with little or no need for any additional software installation or customization.
The system can start being utilized immediately, without any prior setup, which can be extremely beneficial in time-sensitive usage scenarios such as emergencies.

There are two ways to deploy SAND. First, any standard Web browser can be used to retrieve and run the client piece (SAND Internet Browser) as a Java application or an applet. This way, users across various platforms can continuously access large spatial data at a remote location with little or no need for any preceding software installation. The second option is to use a stand-alone SAND Internet Browser along with a locally-installed, Internet-enabled database management system (the server piece). In this case, the SAND Internet Browser can still be utilized to view data from remote locations. However, frequently accessed data can be downloaded to the local database on demand and subsequently accessed locally. Power users can also upload large volumes of spatial data back to the remote server using this enhanced client.

We focused our efforts in two directions. First, we aimed at developing a client-server architecture with efficient caching methods to balance local resources on one side and the significant latency of the network connection on the other. The low bandwidth of this connection is the primary concern in both cases. The outcome of this research primarily addresses the issues of the first type of usage of our browser (i.e., as a remote browser application or an applet) and of other similar applications. The second direction aims at helping users who wish to manipulate large volumes of online data for prolonged periods. We have developed a centralized peer-to-peer approach to provide users with the ability to transfer large volumes of data (i.e., whole data sets to the local database) more efficiently by better utilizing the distributed network resources among active clients of a client-server architecture. We call this architecture APPOINT (Approach for Peer-to-Peer Offloading the INTernet). The results of this research primarily address the issues of the second type of usage of our SAND Internet Browser (i.e., as a stand-alone application).

The rest of this paper is organized as follows. Section 2 describes our client-server approach in more detail. Section 3 focuses on APPOINT, our peer-to-peer approach. Section 4 discusses our work in relation to existing work. Section 5 outlines a sample SAND Internet Browser scenario for both of our remote access approaches. Section 6 contains concluding remarks as well as future research directions.

2. THE CLIENT-SERVER APPROACH
Traditionally, Geographic Information Systems (GIS) such as ArcInfo from ESRI [2] and many spatial databases are designed to be stand-alone products. The spatial database is kept on the same computer or local area network from which it is visualized and queried. This architecture allows for instantaneous transfer of large amounts of data between the spatial database and the visualization module, so that it is perfectly reasonable to use large-bandwidth protocols for communication between them. There are, however, many applications where a more distributed approach is desirable. In these cases, the database is maintained in one location while users need to work with it from possibly distant sites over the network (e.g., the Internet).
These connections can be far slower and less reliable than local area networks, and thus it is desirable to limit the data flow between the database (server) and the visualization unit (client) in order to get a timely response from the system.

Our client-server approach (Figure 1) allows the actual database engine to be run in a central location maintained by spatial database experts, while end users acquire a Java-based client component that provides them with a gateway into the SAND spatial database engine.

[Figure 1: SAND Internet Browser - Client-Server architecture.]

Our client is more than a simple image viewer. Instead, it operates on vector data, allowing the client to execute many operations such as zooming or locational queries locally. In essence, a simple spatial database engine is run on the client. This database keeps a copy of a subset of the whole database, whose full version is maintained on the server. This is a concept similar to caching. In our case, the client acts as a lightweight server in that, given data, it evaluates queries and provides the visualization module with objects to be displayed. It initiates communication with the server only in cases where it does not have enough data stored locally.

Since the locally run database is only updated when additional or newer data is needed, our architecture allows the system to minimize the network traffic between the client and the server when executing the most common user-side operations such as zooming and panning. In fact, as long as the user explores one region at a time (i.e., he or she is not panning all over the database), no additional data needs to be retrieved after the initial population of the client-side database. This makes the system much more responsive than Web mapping services. Due to the complexity of evaluating arbitrary queries (i.e., queries more complex than the window queries needed for database visualization), we do not perform user-specified queries on the client. All user queries are still evaluated on the server side and the results are downloaded onto the client for display. However, assuming that the queries are selective enough (i.e., far fewer elements are returned from the query than the number of elements in the database), the response delay is usually within reasonable limits.

2.1 Client-Server Communication
As mentioned above, the SAND Internet Browser is the client piece of the remotely accessible spatial database server built around the SAND kernel. In order to communicate with the server, whose application programming interface (API) is a Tcl-based scripting language, a servlet specifically designed to interface the SAND Internet Browser with the SAND kernel is required on the server side. This servlet listens on a given port of the server for incoming requests from the client. It translates these requests into the SAND-Tcl language. Next, it transmits these SAND-Tcl commands or scripts to the SAND kernel. After results are provided by the kernel, the servlet fetches and processes them, and then sends those results back to the originating client.

Once the Java servlet is launched, it waits for a client to initiate a connection. It handles both requests for the actual client Java code (needed when the client is run as an applet) and the SAND traffic.
When the client piece is launched, it connects back to the SAND servlet, and the communication is driven by the client piece; the server only responds to the client's queries. The client initiates a transaction by sending a query. The Java servlet parses the query and creates a corresponding SAND-Tcl expression or script in the SAND kernel's native format. It is then sent to the kernel for evaluation or execution. The kernel's response naturally depends on the query and can be a boolean value, a number, or a string representing a value (e.g., a default color), or a whole tuple (e.g., in response to a nearest tuple query). If a script was sent to the kernel (e.g., requesting all the tuples matching some criteria), then an arbitrary amount of data can be returned by the SAND server. In this case, the data is first compressed before it is sent over the network to the client. The data stream gets decompressed at the client before the results are parsed.

Notice that if another spatial database were to be used instead of the SAND kernel, only a simple modification to the servlet would be needed for the SAND Internet Browser to function properly. In particular, the queries sent by the client would need to be recoded into the query language native to this different spatial database. The format of the protocol used for communication between the servlet and the client is unaffected.
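The request loop of such a servlet might look roughly as follows. This is a sketch only: the port number, the helper names, and the translation and kernel-evaluation steps are placeholders, not the actual SAND code.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.zip.GZIPOutputStream;

    final class SandServletSketch {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(4321)) { // port is illustrative
                while (true) {
                    try (Socket client = server.accept()) {
                        BufferedReader in = new BufferedReader(
                                new InputStreamReader(client.getInputStream()));
                        String query = in.readLine();      // client-initiated query
                        String sandTcl = toSandTcl(query); // recode into SAND-Tcl
                        byte[] result = evalInKernel(sandTcl);
                        // results are compressed before transmission to the client
                        GZIPOutputStream out =
                                new GZIPOutputStream(client.getOutputStream());
                        out.write(result);
                        out.finish();
                    }
                }
            }
        }
        private static String toSandTcl(String query) { return query; /* placeholder */ }
        private static byte[] evalInKernel(String tcl) { return new byte[0]; /* placeholder */ }
    }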
3. THE PEER-TO-PEER APPROACH
Many users may want to work on a complete spatial data set for a prolonged period of time. In this case, making an initial investment of downloading the whole data set may be needed to guarantee a satisfactory session. Unfortunately, spatial data tends to be large. A few download requests to a large data set from a set of idle clients waiting to be served can slow the server to a crawl. This is due to the fact that the common client-server approach to transferring data between the two ends of a connection assumes a designated role for each one of the ends (i.e., some clients and a server).
We built APPOINT as a centralized peer-to-peer system to demonstrate our approach for improving common client-server systems. A server still exists. There is a central source for the data and a decision mechanism for the service. The environment still functions as a client-server environment under many circumstances. Yet, unlike many common client-server environments, APPOINT maintains more information about the clients. This includes inventories of what each client downloads, their availabilities, etc. When the client-server service starts to perform poorly, or a request for a data item comes from a client with a poor connection to the server, APPOINT can start appointing appropriate active clients of the system to serve on behalf of the server, i.e., clients who have already volunteered their services and can take on the role of peers (hence, moving from a client-server scheme to a peer-to-peer scheme). The directory service for the active clients is still performed by the server, but the server no longer serves all of the requests. In this scheme, clients are used mainly for the purpose of sharing their networking resources rather than introducing new content, and hence they help offload the server and scale up the service.
The existence of a server makes the management of dynamic peers simpler than in pure peer-to-peer approaches, where each peer that needs to make a decision must flood the system with messages to discover who is still active. The server is also the main source of data, and under regular circumstances it may not forward the service.
Data is assumed to be formed of files. A single file forms the atomic means of communication. APPOINT optimizes requests with respect to these atomic requests. Frequently accessed data sets are replicated as a byproduct of having been requested by a large number of users. This opens up the potential for bypassing the server in future downloads of the data by other users, as there are now many new points of access to it. Bypassing the server is useful when the server's bandwidth is limited. The existence of a server assures that unpopular data is also available at all times. The service depends on the availability of the server. The server is now more resilient to congestion, as the service is more scalable.
Backups and other maintenance activities are already being performed on the server, and hence no extra administrative effort is needed for the dynamic peers. If a peer goes down, no extra precautions are taken. In fact, APPOINT does not require any additional resources from an already existing client-server environment but, instead, expands its capability. The peers simply get on to or get off from a table on the server.
Uploading data is achieved in a similar manner as downloading data. For uploads, the active clients can again be utilized. Users can upload their data to a set of peers other than the server if the server is busy or resides in a distant location. Eventually the data is propagated to the server. All of the operations are performed in a fashion that is transparent to the clients. Upon initial connection to the server, they can be queried as to whether or not they want to share their idle networking time and disk space. The rest of the operations follow transparently after the initial contact.
APPOINT works on the application layer, not on lower layers. This achieves platform independence and easy deployment of the system. APPOINT is not a replacement but an addition to current client-server architectures. We developed a library of function calls that, when placed in a client-server architecture, starts the service. We are developing advanced peer selection schemes that incorporate the location of active clients, bandwidth among active clients, data size to be transferred, load on active clients, and availability of active clients to form a complete means of selecting the best clients that can become efficient alternatives to the server.
With APPOINT we are defining a very simple API that could be used within an existing client-server system easily. Instead of denial of service or a slow connection, this API can be utilized to forward the service appropriately.
The API for the server side is:

start(serverPortNo)
makeFileAvailable(file,location,boolean)
callback receivedFile(file,location)
callback errorReceivingFile(file,location,error)
stop()

Similarly, the API for the client side is:

start(clientPortNo,serverPortNo,serverAddress)
makeFileAvailable(file,location,boolean)
receiveFile(file,location)
sendFile(file,location)
stop()

The server, after starting the APPOINT service, can make all of its data files available to the clients by using the makeFileAvailable method. This enables APPOINT to treat the server as one of the peers. The two callback methods of the server are invoked when a file is received from a client, or when an error is encountered while receiving a file from a client. APPOINT guarantees that at least one of the callbacks will be called, so that the user (who may not be online anymore) can always be notified (e.g., via email). Clients localizing large data files can make these files available to the public by using the makeFileAvailable method on the client side.
For example, in our SAND Internet Browser, we offer the localization of spatial data as a function that can be chosen from our menus. This functionality enables users to download data sets completely to their local disks before starting their queries or analysis. In our implementation, we have calls to the APPOINT service both on the client and the server sides, as mentioned above. Hence, when a localization request comes to the SAND Internet Browser, the browser leaves the decisions for optimally finding and localizing a data set to the APPOINT service. Our server also makes its data files available over APPOINT. The mechanism for the localization operation is shown in more detail, with the APPOINT protocols, in Figure 2. The upload operation is performed in a similar fashion.
Figure 2: The localization operation in APPOINT.
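To make the shape of this API more concrete, the following minimal Java sketch shows how a client-side integration might drive it. The interface and all names are a hypothetical paraphrase of the calls listed above - the paper does not specify a host-language binding - so this illustrates the intended usage pattern rather than the actual library.

// Hypothetical Java binding of the client-side APPOINT API listed above.
// Names, signatures, ports and the host name are invented for illustration.
public interface AppointClient {
    void start(int clientPortNo, int serverPortNo, String serverAddress);
    // 'shared' advertises the file to other peers when true.
    void makeFileAvailable(String file, String location, boolean shared);
    void receiveFile(String file, String location);  // download via best peer(s)
    void sendFile(String file, String location);     // upload, possibly via peers
    void stop();
}

class LocalizationExample {
    static void localize(AppointClient appoint) {
        // Join the service: our port, the server's port and address (all invented).
        appoint.start(9000, 9001, "sand.example.edu");
        // Let APPOINT decide whether the server or an active peer delivers the file.
        appoint.receiveFile("epa_arsenic.dat", "/local/data");
        // Volunteer the localized copy, so later requests can bypass the server.
        appoint.makeFileAvailable("/local/data/epa_arsenic.dat", "/local/data", true);
        appoint.stop();
    }
}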
4. RELATED WORK
There has been a substantial amount of research on remote access to spatial data. One specific approach has been adopted by numerous Web-based mapping services (MapQuest [5], MapsOnUs [6], etc.). The goal in this approach is to enable remote users, typically equipped only with standard Web browsers, to access the company's spatial database server and retrieve information in the form of pictorial maps from it. The solution presented by most of these vendors is based on performing all the calculations on the server side and transferring only bitmaps that represent the results of user queries and commands. Although the advantage of this solution is the minimization of both hardware and software resources on the client site, the resulting product has severe limitations in terms of available functionality and response time (each user action results in a new bitmap being transferred to the client).
Work described in [9] examines a client-server architecture for viewing large images that operates over a low-bandwidth network connection. It presents a technique based on wavelet transformations that allows the minimization of the amount of data that needs to be transferred over the network between the server and the client. In this case, while the server holds the full representation of the large image, only a limited amount of data needs to be transferred to the client to enable it to display a currently requested view into the image. On the client side, the image is reconstructed into a pyramid representation to speed up zooming and panning operations. Both the client and the server keep a common mask that indicates what parts of the image are available on the client and what needs to be requested. This also allows dropping unnecessary parts of the image from the main memory on the server.
Other related work has been reported in [16], where a client-server architecture is described that is designed to provide end users with access to a server. It is assumed that this data server manages vast databases that are impractical to store on individual clients. This work blends raster data management (stored in pyramids [22]) with vector data stored in quadtrees [19, 20].
For our peer-to-peer transfer approach (APPOINT), Napster is the forefather, where a directory service is centralized on a server and users exchange music files that they have stored on their local disks. Our application domain, where the data is already freely available to the public, forms a prime candidate for such a peer-to-peer approach. Gnutella is a pure (decentralized) peer-to-peer file exchange system. Unfortunately, it suffers from scalability issues, i.e., floods of messages between peers are required in order to map connectivity in the system. Other systems followed these popular systems, each addressing a different flavor of sharing over the Internet. Many peer-to-peer storage systems have also recently emerged. PAST [18], Eternity Service [7], CFS [10], and OceanStore [15] are some peer-to-peer storage systems. Some of these systems have focused on anonymity, while others have focused on persistence of storage. Also, other approaches, like SETI@Home [21], made other resources, such as idle CPUs, work together over the Internet to solve large-scale computational problems. Our goal is different from these approaches. With APPOINT, we want to improve existing client-server systems in terms of performance by using idle networking resources among active clients. Hence, other issues like anonymity, decentralization, and persistence of storage were less important in our decisions. Confirming the authenticity of indirectly delivered data sets is not yet addressed with APPOINT. We want to expand our research in the future to address this issue.
From our perspective, although APPOINT employs some of the techniques used in peer-to-peer systems, it is also closely related to current Web caching architectures. Squirrel [13] forms the middle ground. It creates a pure peer-to-peer collaborative Web cache among the Web browser caches of the machines in a local-area network. Except for this recent peer-to-peer approach, Web caching is mostly a well-studied topic in the realm of server/proxy-level caching [8, 11, 14, 17]. Collaborative Web caching systems, the most relevant of these for our research, focus on creating hierarchical, hash-based, central directory-based, or multicast-based caching schemes. We do not compete with these approaches. In fact, APPOINT can work in tandem with collaborative Web caching if they are deployed together. We try to address the situation where a request arrives at a server, meaning all the caches report a miss. Hence, the point where the server is reached can be used to take a central decision, but then the actual service request can be forwarded to a set of active clients, i.e., the download and upload operations. Cache misses are especially common in the type of large data-based services on which we are working.
Most of the Web caching schemes in use today employ a replacement policy that gives priority to replacing the largest-sized items over smaller-sized ones. Hence, these policies would lead to the immediate replacement of our relatively large data files even though they may be used frequently. In addition, in our case, the user community that accesses a certain data file may also be very dispersed from a network point of view and thus cannot take advantage of any of the caching schemes. Finally, none of the Web caching methods address the symmetric issue of large data uploads.
5. A SAMPLE APPLICATION
FedStats [1] is an online source that enables ordinary citizens to access official statistics of numerous federal agencies without knowing in advance which agency produced them. We are using a FedStats data set as a testbed for our work. Our goal is to provide more power to the users of FedStats by utilizing the SAND Internet Browser. As an example, we looked at two data files corresponding to Environmental Protection Agency (EPA)-regulated facilities that have chlorine and arsenic, respectively. For each file, we had the following information available: EPA-ID, name, street, city, state, zip code, latitude, longitude, followed by flags to indicate if that facility is in the following EPA programs: Hazardous Waste, Wastewater Discharge, Air Emissions, Abandoned Toxic Waste Dump, and Active Toxic Release.
We put this data into a SAND relation where the spatial attribute 'location' corresponds to the latitude and longitude. Some queries that can be handled with our system on this data include:
1. Find all EPA-regulated facilities that have arsenic and participate in the Air Emissions program, and:
(a) Lie in Georgia to Illinois, alphabetically.
(b) Lie within Arkansas or 30 miles within its border.
(c) Lie within 30 miles of the border of Arkansas (i.e., both sides of the border).
2. For each EPA-regulated facility that has arsenic, find all EPA-regulated facilities that have chlorine and:
(a) That are closer to it than to any other EPA-regulated facility that has arsenic.
(b) That participate in the Air Emissions program and are closer to it than to any other EPA-regulated facility that has arsenic. In order to avoid reporting a particular facility more than once, we use our 'group by EPA-ID' mechanism.
Figure 3 illustrates the output of an example query that finds all arsenic sites within a given distance of the border of Arkansas. The sites are obtained in an incremental manner with respect to a given point. This ordering is shown by using different color shades.
Figure 3: Sample output from the SAND Internet Browser - Large dark dots indicate the result of a query that looks for all arsenic sites within a given distance from Arkansas. Different color shades are used to indicate ranking order by the distance from a given point.
With this example data, it is possible to work with the SAND Internet Browser online as an applet (connecting to a remote server) or after localizing the data and then opening it locally. In the first case, for each action taken, the client-server architecture will decide what to ask for from the server. In the latter case, the browser will use the peer-to-peer APPOINT architecture for first localizing the data.
6. CONCLUDING REMARKS
An overview of our efforts in providing remote access to large spatial data has been given. We have outlined our approaches and introduced their individual elements. Our client-server approach improves system performance by using efficient caching methods when a remote server is accessed from thin clients.
APPOINT forms an alternative approach that improves performance under an existing client-server system by using idle client resources when individual users want to work on a data set for longer periods of time using their client computers.
For the future, we envision the development of new efficient algorithms that will support large online data transfers within our peer-to-peer approach using multiple peers simultaneously. We assume that a peer (client) can become unavailable at any time, and hence provisions need to be in place to handle such a situation. To address this, we will augment our methods to include efficient dynamic updates. Upon completion of this step of our work, we also plan to run comprehensive performance studies on our methods.
Another issue is how to access data from different sources in different formats. In order to access multiple data sources in real time, it is desirable to look for a mechanism that would support data exchange by design. The XML protocol [3] has emerged to become virtually a standard for describing and communicating arbitrary data. GML [4] is an XML variant that is becoming increasingly popular for the exchange of geographical data. We are currently working on making SAND XML-compatible so that the user can instantly retrieve spatial data provided by various agencies in the GML format via their Web services and then explore, query, or process this data further within the SAND framework. This will turn the SAND system into a universal tool for accessing any spatial data set, as it will be deployable on most platforms, work efficiently given large amounts of data, be able to tap any GML-enabled data source, and provide an easy-to-use graphical user interface. This will also convert the SAND system from a research-oriented prototype into a product that could be used by end users for accessing, viewing, and analyzing their data efficiently and with minimum effort.
7. REFERENCES
[1] Fedstats: The gateway to statistics from over 100 U.S. federal agencies. http://www.fedstats.gov/, 2001.
[2] Arcinfo: Scalable system of software for geographic data creation, management, integration, analysis, and dissemination. http://www.esri.com/software/arcgis/arcinfo/index.html, 2002.
[3] Extensible markup language (XML). http://www.w3.org/XML/, 2002.
[4] Geography markup language (GML) 2.0. http://opengis.net/gml/01-029/GML2.html, 2002.
[5] Mapquest: Consumer-focused interactive mapping site on the web. http://www.mapquest.com, 2002.
[6] Mapsonus: Suite of online geographic services. http://www.mapsonus.com, 2002.
[7] R. Anderson. The Eternity Service. In Proceedings of PRAGOCRYPT'96, pages 242-252, Prague, Czech Republic, September 1996.
[8] L. Breslau, P. Cao, L. Fan, G. Phillips, and S. Shenker. Web caching and Zipf-like distributions: Evidence and implications. In Proceedings of IEEE Infocom'99, pages 126-134, New York, NY, March 1999.
[9] E. Chang, C. Yap, and T. Yen. Realtime visualization of large images over a thinwire. In R. Yagel and H. Hagen, editors, Proceedings IEEE Visualization'97 (Late Breaking Hot Topics), pages 45-48, Phoenix, AZ, October 1997.
[10] F. Dabek, M. F. Kaashoek, D. Karger, R. Morris, and I. Stoica.
Wide-area cooperative storage with CFS. In Proceedings of the ACM SOSP'01, pages 202-215, Banff, AL, October 2001.
[11] A. Dingle and T. Partl. Web cache coherence. Computer Networks and ISDN Systems, 28(7-11):907-920, May 1996.
[12] C. Esperança and H. Samet. Experience with SAND/Tcl: a scripting tool for spatial databases. Journal of Visual Languages and Computing, 13(2):229-255, April 2002.
[13] S. Iyer, A. Rowstron, and P. Druschel. Squirrel: A decentralized peer-to-peer Web cache. Rice University/Microsoft Research, submitted for publication, 2002.
[14] D. Karger, A. Sherman, A. Berkheimer, B. Bogstad, R. Dhanidina, K. Iwamoto, B. Kim, L. Matkins, and Y. Yerushalmi. Web caching with consistent hashing. Computer Networks, 31(11-16):1203-1213, May 1999.
[15] J. Kubiatowicz, D. Bindel, Y. Chen, S. Czerwinski, P. Eaton, D. Geels, R. Gummadi, S. Rhea, H. Weatherspoon, W. Weimer, C. Wells, and B. Zhao. OceanStore: An architecture for global-scale persistent store. In Proceedings of the ACM ASPLOS'00, pages 190-201, Cambridge, MA, November 2000.
[16] M. Potmesil. Maps alive: viewing geospatial information on the WWW. Computer Networks and ISDN Systems, 29(8-13):1327-1342, September 1997. Also Hyper Proceedings of the 6th International World Wide Web Conference, Santa Clara, CA, April 1997.
[17] M. Rabinovich, J. Chase, and S. Gadde. Not all hits are created equal: Cooperative proxy caching over a wide-area network. Computer Networks and ISDN Systems, 30(22-23):2253-2259, November 1998.
[18] A. Rowstron and P. Druschel. Storage management and caching in PAST, a large-scale, persistent peer-to-peer storage utility. In Proceedings of the ACM SOSP'01, pages 160-173, Banff, AL, October 2001.
[19] H. Samet. Applications of Spatial Data Structures: Computer Graphics, Image Processing, and GIS. Addison-Wesley, Reading, MA, 1990.
[20] H. Samet. The Design and Analysis of Spatial Data Structures. Addison-Wesley, Reading, MA, 1990.
[21] SETI@Home. http://setiathome.ssl.berkeley.edu/, 2001.
[22] L. J. Williams. Pyramidal parametrics. Computer Graphics, 17(3):1-11, July 1983. Also Proceedings of the SIGGRAPH'83 Conference, Detroit, July 1983.", "keywords": "large spatial datum;sand;datum visualization;remote access;internet;datum management;web browser;gi;dynamic network infrastructure;client/server;spatial query evaluation;client-server architecture;peer-to-peer;centralized peer-to-peer approach;internet-enabled database management system;network latency"} {"name": "train_C-55", "title": "Context Awareness for Group Interaction Support", "abstract": "In this paper, we present an implemented system for supporting group interaction in mobile distributed computing environments. First, an introduction to context computing and a motivation for using contextual information to facilitate group interaction is given. We then present the architecture of our system, which consists of two parts: a subsystem for location sensing that acquires information about the location of users as well as spatial proximities between them, and one for the actual context-aware application, which provides services for group interaction.", "fulltext": "1. INTRODUCTION
Today's computing environments are characterized by an increasing number of powerful, wirelessly connected mobile devices. Users can move throughout an environment while carrying their computers with them and having remote access to information and services, anytime and anywhere.
New situations appear, where the user's context - for example his current location or nearby people - is more dynamic; computation no longer occurs at a single location and in a single context, but comprises a multitude of situations and locations. This development leads to a new class of applications, which are aware of the context in which they run, thus bringing virtual and real worlds together.
Motivated by this, and by the fact that only a few studies have been done on supporting group communication in such computing environments [12], we have developed a system which we refer to as the Group Interaction Support System (GISS). It supports group interaction in mobile distributed computing environments in a way that group members no longer need to be at the same place in order to interact with each other, or just to be aware of the others' situation.
In the following subchapters, we give a short overview of context-aware computing and motivate its benefits for supporting group interaction. A software framework for developing context-sensitive applications is presented, which serves as middleware for GISS. Chapter 2 presents the architecture of GISS, and chapters 3 and 4 discuss the location sensing and group interaction concepts of GISS in more detail. Chapter 5 gives a final summary of our work.
1.1 What is Context Computing?
According to Merriam-Webster's Online Dictionary, context is defined as the interrelated conditions in which something exists or occurs. Because this definition is very general, many approaches have been made to define the notion of context with respect to computing environments.
Most definitions of context are done by enumerating examples or by choosing synonyms for context. The term context-aware was first introduced in [10], where context is referred to as location, identities of nearby people and objects, and changes to those objects. In [2], context is also defined by an enumeration of examples, namely location, identities of the people around the user, the time of day, season, temperature, etc. [9] defines context as the user's location, environment, identity and time. Here we conform to a widely accepted and more formal definition, which defines context as any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves. [4]
[4] identifies four primary types of context information (sometimes referred to as context dimensions) that are - with respect to characterizing the situation of an entity - more important than others. These are location, identity, time and activity, which can also be used to derive other sources of contextual information (secondary context types). For example, if we know a person's identity, we can easily derive related information about this person from several data sources (e.g., day of birth or e-mail address).
According to this definition, [4] defines a system to be context-aware if it uses context to provide relevant information and/or services to the user, where relevancy depends on the user's task. [4] also gives a classification of features for context-aware applications, which comprises presentation of information and services to a user, automatic execution of a service, and tagging of context to information for later retrieval.
Figure 1. Layers of a context-aware system
Context computing is based on two major issues, namely identifying relevant context (identity, location, time, activity) and using the obtained context (automatic execution, presentation, tagging). In between, there are a few layers (see Figure 1). First, the obtained low-level context information has to be transformed, aggregated and interpreted (context transformation) and represented in an abstract context world model (context representation), either centralized or decentralized. Finally, the stored context information is used to trigger certain context events (context triggering). [7]
1.2 Group Interaction in Context
After these abstract and formal definitions of what context and context computing are, we now focus on the main goal of this work, namely how the interaction of mobile group members can be supported by using context information.
In [6] we have identified organizational systems to be crucial for supporting mobile groups (see Figure 2). First, there has to be an Information and Knowledge Management System, which is capable of supporting a team with its information-processing and knowledge-gathering needs. The next part is the Awareness System, which is dedicated to the perceptualisation of the effects of team activity. It does this by communicating work context, agenda and workspace information to the users. The Interaction Systems provide support for the communication among team members, either synchronous or asynchronous, and for the shared access to artefacts, such as documents. Mobility Systems deploy mechanisms to enable any-place access to team memory as well as the capturing and delivery of awareness information from and to any place. Last but not least, the Organisational Innovation System integrates aspects of the team itself, like roles, leadership and shared facilities.
Figure 2. Support for Mobile Groups [6]
With respect to these five aspects of team support, we focus on interaction and partly cover mobility and awareness support. Group interaction includes all means that enable group members to communicate freely with all the other members. At this point, the question comes up of how context information can be used for supporting group interaction. We believe that information about the current situation of a person provides a surplus value to existing group interaction systems. Context information facilitates group interaction by allowing each member to be aware of the availability status or the current location of every other group member, which again makes it possible to form groups dynamically, to place virtual post-its in the real world, or to determine which people are around.
Most of today's context-aware applications use location and time only, and location is referred to as a crucial type of context information [3]. We also see the importance of location information in mobile and ubiquitous environments, wherefore a main focus of our work is on the utilization of location information and information about users in spatial proximity. Nevertheless, we believe that location, as the only type of context information used, is not sufficient to support group interaction, wherefore we also take advantage of the other three primary types, namely identity, time and activity.
This provides a comprehensive description of a user's current situation and thus enables numerous means of supporting group interaction, which are described in detail in chapter 4.4.
When we look at the types of context information stated above, we can see that all of them are single-user-centred, taking into account only the context of the user itself. We believe that, for the support of group interaction, the status of the group itself has also to be taken into account. Therefore, we have added a fifth context dimension, group context, which comprises more than the sum of the individual members' contexts. Group context includes any information about the situation of a whole group, for example how many members a group currently has or if a certain group meets right now.
1.3 Context Middleware
The Group Interaction Support System (GISS) uses the software framework introduced in [1], which serves as a middleware for developing context-sensitive applications. This so-called Context Framework is based on a distributed communication architecture, and it supports different kinds of transport protocols and message coding mechanisms.
A main feature of the framework is the abstraction of context information retrieval via various sensors and its delivery to a level where, for the application designer, no difference appears between these different kinds of context retrieval mechanisms; the information retrieval is hidden from the application developer. This is achieved by so-called entities, which describe objects - e.g., a human user - that are important for a certain context scenario.
Entities express their functionality by the use of so-called attributes, which can be loaded into the entity. These attributes are complex pieces of software, which are implemented as Java classes. Typical attributes are encapsulations of sensors, but they can also be used to implement context services, for example to notify other entities about location changes of users.
Each entity can contain a collection of such attributes, where an entity itself is an attribute. The initial set of attributes an entity contains can change dynamically at runtime, if an entity loads or unloads attributes from the local storage or over the network. In order to load and deploy new attributes, an entity has to reference a class loader and a transport and lookup layer, which manages the lookup mechanism for discovering other entities as well as the transport. XML configuration files specify which initial set of entities should be loaded and which attributes these entities own.
The communication between entities and attributes is based on context events. Each attribute is able to trigger events, which are addressed to other attributes and entities respectively, independently of which physical computer they are running on. Among other things, an event contains the name of the event and a list of parameters delivering information about the event itself.
Related to this event-based architecture is the use of ECA (Event-Condition-Action) rules for defining the behaviour of the context system. For this purpose, every entity has a rule interpreter, which catches triggered events, checks conditions associated with them and causes certain actions. These rules are referenced by the entity's XML configuration.
A rule itself is even able to trigger the insertion of new rules or the unloading of existing rules at runtime, in order to change the behaviour of the context system dynamically.
To sum up, the Context Framework provides a flexible, distributed architecture for hiding low-level sensor data from high-level applications, and it hides external communication details from the application developer. Furthermore, it is able to adapt its behaviour dynamically by loading attributes, entities or ECA rules at runtime.
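To illustrate how entities, attributes and context events relate, the following minimal Java sketch models the pattern described above. All class and method names are invented; the actual Context Framework API is not reproduced here.

import java.util.*;

// Minimal sketch of the entity/attribute/event pattern described above.
// All names are hypothetical; they do not reproduce the real framework API.
class ContextEvent {
    final String name;
    final Map<String, String> parameters;
    ContextEvent(String name, Map<String, String> parameters) {
        this.name = name;
        this.parameters = parameters;
    }
}

// An attribute can handle events and trigger new ones via its owning entity.
abstract class Attribute {
    protected Entity owner;
    abstract void handle(ContextEvent event);
}

// An entity holds a dynamic collection of attributes and dispatches events
// to them; an ECA rule interpreter would hook in at dispatch time to check
// the conditions attached to each event before causing actions.
class Entity extends Attribute {
    private final List<Attribute> attributes = new ArrayList<>();

    void load(Attribute attribute) {      // attributes may be loaded at runtime
        attribute.owner = this;
        attributes.add(attribute);
    }

    void trigger(ContextEvent event) {
        for (Attribute a : attributes) {
            a.handle(event);
        }
    }

    @Override
    void handle(ContextEvent event) {     // an entity is itself an attribute
        trigger(event);
    }
}

A sensor encapsulation, for instance, would be written as an attribute that calls owner.trigger(...) whenever new readings arrive, while the initial set of entities and attributes would come from the XML configuration mentioned above.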
2. ARCHITECTURE OVERVIEW
As GISS uses the Context Framework described in chapter 1.3 as middleware, every user is represented by an entity, as is the central server, which is responsible for context transformation, context representation and context triggering (cf. Figure 1).
A main part of our work concerns the automated acquisition of position information and its sensor-independent provision at application level. We do not only sense the current location of users, but also determine spatial proximities between them. Developing the architecture, we focused on keeping the client as simple as possible and reducing the communication between client and server to a minimum.
Each client may have various location and/or proximity sensors attached, which are encapsulated by respective Context Framework attributes (Sensor Encapsulation). These attributes are responsible for integrating native sensor implementations into the Context Framework and sending sensor-dependent position information to the server. We consider it very important to support different types of sensors, even at the same time, in order to improve location accuracy on the one hand, while providing a pervasive location-sensing environment with seamless transitions between different location-sensing techniques on the other hand.
All supported location and proximity sensors are represented by server-side context attributes, which correspond to the client-side sensor-encapsulation attributes and abstract the sensor-dependent position information received from all users via the wireless network (Sensor Abstraction). This requires a context repository, where the mapping of diverse physical positions to standardized locations is stored.
The standardized location and proximity information of each user is then passed to the so-called Sensor Fusion attributes, one for symbolic locations and a second one for spatial proximities. Their job is to merge the location and proximity information of clients, respectively, which is described in detail in chapter 3.3. Every time the symbolic location of a user or the spatial proximity between two users changes, the Sensor Fusion attributes notify the GISS Core attribute, which controls the application.
Because of the abstraction of sensor-dependent position information, the system can easily be extended with additional sensors, just by implementing the (typically two) attributes for encapsulating sensors (some sensors may not need a client-side part), abstracting physical positions, and observing the interface to GISS Core.
Figure 3. Architecture of the Group Interaction Support System (GISS)
The GISS Core attribute is the central coordinator of the application as it presents itself to the user. It not only serves as an interface to the location-sensing subsystem, but also collects further context information in other dimensions (time, identity or activity). Every time a change in the context of one or more users is detected, GISS Core evaluates the effect of these changes on the user, on the groups he belongs to and on the other members of these groups. Whenever necessary, events are thrown to the affected clients to trigger context-aware activities, like changing the presentation of awareness information or the execution of services.
The client-side part of the application is kept as simple as possible. Furthermore, modular design was not only an issue on the sensor side, but also when designing the user interface architecture. Thus, the complete user interface can easily be exchanged, as long as the new interface attribute takes into account and understands all of the defined events.
The currently implemented user interface is split up into two parts, which are also represented by two attributes. The central attribute on the client side is the so-called Instant Messenger Encapsulation, which on the one hand interacts with the server through events, and on the other hand serves as a proxy for the external application the user interface is built on. As external application, we use an existing open-source instant messenger - the ICQ (http://www.icq.com/)-compliant Simple Instant Messenger (SIM, http://sim-icq.sourceforge.net). We have chosen an instant messenger as front-end because it provides a well-known interface for most users and facilitates a seamless integration of group interaction support, thus increasing acceptance and ease of use. As the basic functionality of the instant messenger - to serve as a client in an instant messenger network - remains fully functional, our application is able to use the features already provided by the messenger. For example, the contexts activity and identity are derived from the messenger network, as described later.
The Instant Messenger Encapsulation is also responsible for supporting group communication. Through the interface of the messenger, it provides means of synchronous and asynchronous communication as well as a context-aware reminder system and tools for managing groups and the user's own availability status.
The second part of the user interface is a visualisation of the users' locations, which is implemented in the attribute Viewer. The current implementation provides a two-dimensional map of the campus, but it can easily be replaced by other visualisations, for example a three-dimensional VRML model. Furthermore, this visualisation is used to show the artefacts for asynchronous communication. Based on a floor-plan view of the geographical area the user currently resides in, it gives a quick overview of which people are nearby and their state, and provides means to interact with them.
In the following chapters 3 and 4, we describe the location-sensing backend and the application front-end for supporting group interaction in more detail.
3. LOCATION SENSING
In the following chapter, we will introduce a location model, which is used for representing locations; afterwards, we will describe the integration of location and proximity sensors in more detail. Finally, we will have a closer look at the fusion of location and proximity information acquired by various sensors.
3.1 Location Model
A location model (i.e., a context representation for the context information location) is needed to represent the locations of users, in order to be able to facilitate location-related queries like given a location, return a list of all the objects there or given an object, return its current location. In general, there are two approaches [3, 5]: symbolic models, which represent location as abstract symbols, and geometric models, which represent location as coordinates.
We have chosen a symbolic location model, which refers to locations as abstract symbols like Room P111 or Physics Building, because we do not require geometric location data. Instead, abstract symbols are more convenient for human interaction at application level. Furthermore, we use a symbolic location containment hierarchy similar to the one introduced in [11], which consists of top-level regions, which contain buildings, which contain floors, and the floors again contain rooms. We distinguish four types, namely region (e.g., a whole campus), section (e.g., a building or an outdoor section), level (e.g., a certain floor in a building) and area (e.g., a certain room). We introduce a fifth type of location, which we refer to as semantic. These so-called semantic locations can appear at any level in the hierarchy and they can be nested, but they do not necessarily have a geographic representation. Examples of such semantic locations are tagged objects within a room (e.g., a desk and a printer on this desk) or the name of a department, which contains certain rooms.
Figure 4. Symbolic Location Containment Hierarchy
The hierarchy of symbolic locations as well as the type of each position is stored in the context repository.
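The containment hierarchy and the repository lookup can be sketched in a few lines of Java. The class and field names below are our own and are not taken from the GISS implementation; they merely mirror the five location types and the physical-to-symbolic mapping described above.

import java.util.*;

// Sketch of the symbolic location containment hierarchy described above.
// Names are illustrative only; they do not mirror the GISS code base.
class SymbolicLocation {
    enum Type { REGION, SECTION, LEVEL, AREA, SEMANTIC }

    final String name;
    final Type type;
    final SymbolicLocation parent;   // null for the top-level region

    SymbolicLocation(String name, Type type, SymbolicLocation parent) {
        this.name = name;
        this.type = type;
        this.parent = parent;
    }

    // Path from the root down to this location, e.g. campus/building1/1stfloor/room2.
    List<SymbolicLocation> pathFromRoot() {
        Deque<SymbolicLocation> path = new ArrayDeque<>();
        for (SymbolicLocation l = this; l != null; l = l.parent) path.addFirst(l);
        return new ArrayList<>(path);
    }
}

class LocationRepository {
    // Maps a sensor-specific physical position (e.g. an RFID tag serial number
    // or a Bluetooth MAC address) to one or more symbolic locations.
    private final Map<String, List<SymbolicLocation>> mapping = new HashMap<>();

    void register(String physicalPosition, SymbolicLocation... locations) {
        mapping.put(physicalPosition, Arrays.asList(locations));
    }

    List<SymbolicLocation> lookup(String physicalPosition) {
        return mapping.getOrDefault(physicalPosition, Collections.emptyList());
    }
}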
3.2 Sensors
Our architecture supports two different kinds of sensors: location sensors, which acquire location information, and proximity sensors, which detect spatial proximities between users.
As described above, each sensor has a server-side and, in most cases, a corresponding client-side implementation, too. While the client-side Sensor Encapsulation attributes are responsible for acquiring low-level sensor data and transmitting it to the server, the corresponding server-side Sensor Abstraction attributes transform this data into a uniform and sensor-independent format, namely symbolic locations and IDs of users in spatial proximity, respectively. Afterwards, the respective Sensor Fusion attribute is triggered with this sensor-independent information of a certain user, detected by a particular sensor. Such notifications are performed every time the sensor acquires new information. Accordingly, the Sensor Abstraction attributes are responsible for detecting when a certain sensor is no longer available on the client side (e.g., if it has been unplugged by the user), or when position or proximity can no longer be determined (e.g., an RFID reader cannot detect tags), and for notifying the corresponding sensor fusion about this.
3.2.1 Location Sensors
In order to sense physical positions, the Sensor Encapsulation attributes asynchronously transmit sensor-dependent position information to the server. The corresponding location Sensor Abstraction attributes collect these physical positions delivered by the sensors of all users, and perform a repository lookup in order to get the associated symbolic location. This requires certain tables for each sensor, which map physical positions to symbolic locations. One physical position may have multiple symbolic locations at different accuracy levels in the location hierarchy assigned to it, for example if a sensor covers several rooms. If such a mapping can be found, an event is thrown in order to notify the attribute Location Sensor Fusion about the symbolic locations a certain sensor of a particular user determined.
We have prototypically implemented three kinds of location sensors, which are based on WLAN (IEEE 802.11), Bluetooth and RFID (Radio Frequency Identification). We have chosen these three completely different sensors because of their differences concerning accuracy, coverage and administrative effort, in order to evaluate the flexibility of our system (see Table 1).
The most accurate one is an RFID sensor, which is based on an active RFID reader. As soon as the reader is plugged into the client, it scans for active RFID tags in range and transmits their serial numbers to the server, where they are mapped to symbolic locations. We also take into account RSSI (Received Signal Strength Indication), which provides position accuracy of a few centimetres and thus enables us to determine which RFID tag is nearest. Due to this high accuracy, RFID is used for locating users within rooms. The administration is quite simple; once a new RFID tag is placed, its serial number simply has to be assigned to a single symbolic location. A drawback is the poor availability, which can be traced back to the fact that RFID readers are still very expensive.
The second one is an 802.11 WLAN sensor. For this, we integrated a purely software-based, commercial WLAN positioning system for tracking clients on the university's campus-wide WLAN infrastructure. The achieved position accuracy is in the range of a few metres and is thus suitable for location sensing at the granularity of rooms. A big disadvantage is that a map of the whole area has to be calibrated with measuring points at a distance of 5 metres each. Because most mobile computers are equipped with WLAN technology and the positioning system is a software-only solution, nearly everyone is able to use this kind of sensor.
Finally, we have implemented a Bluetooth sensor, which detects Bluetooth tags (i.e., Bluetooth modules with known position) in range and transmits them to the server, which maps them to symbolic locations. Because we do not use signal-strength information in the current implementation, the accuracy is above 10 metres, and therefore a single Bluetooth MAC address is associated with several symbolic locations, according to the physical locations such a Bluetooth module covers. This leads to the disadvantage that the range of each Bluetooth tag has to be determined and mapped to symbolic locations within this range.

Table 1. Comparison of implemented sensors

Sensor      Accuracy   Coverage    Administration
RFID        < 10 cm    poor        easy
WLAN        1-4 m      very good   very time-consuming
Bluetooth   ~ 10 m     good        time-consuming

3.2.2 Proximity Sensors
Any sensor that is able to detect whether two users are in spatial proximity is referred to as a proximity sensor.
Similar to the location sensors, the Proximity Sensor Abstraction attributes collect the physical proximity information of all users and transform it into mappings of user IDs. We have implemented two types of proximity sensors, based on Bluetooth on the one hand and on fused symbolic locations (see chapter 3.3.1) on the other hand.
The Bluetooth implementation goes along with the implementation of the Bluetooth-based location sensor. The already determined Bluetooth MAC addresses in range of a certain client are compared with those of all other clients, and each time the attribute Bluetooth Sensor Abstraction detects congruence, it notifies the proximity sensor fusion about this. The second sensor is based on the symbolic locations processed by Location Sensor Fusion, wherefore it does not need a client-side implementation. Each time the fused symbolic location of a certain user changes, it checks whether he is at the same symbolic location as another user and again notifies the proximity sensor fusion about the proximity between these two users. The range can be restricted to any level of the location containment hierarchy, for example to room granularity.
A currently unresolved issue is the incomparable granularity of different proximity sensors. For example, symbolic locations at the same level in the location hierarchy mostly do not cover the same geographic area.
3.3 Sensor Fusion
The core of the location-sensing subsystem is the sensor fusion. It merges the data of various sensors, while coping with differences concerning accuracy, coverage and sample rate. According to the two kinds of sensors described in chapter 3.2, we distinguish between the fusion of location sensors on the one hand, and the fusion of proximity sensors on the other hand.
The fusion of symbolic locations as well as the fusion of spatial proximities operates on standardized information (cf. Figure 3). This has the advantage that additional position and proximity sensors can be added easily, and that the fusion algorithms can be replaced by ones that are more sophisticated.
Fusion is performed for each user separately and takes into account the measurements at a single point in time only (i.e., no history information is used for determining the current location of a certain user). The algorithm collects all events thrown by the Sensor Abstraction attributes, performs fusion and triggers the GISS Core attribute if the symbolic location of a certain user or the spatial proximity between users has changed.
An important feature is the persistent storage of the location and proximity history in a database in order to allow future retrieval. This enables applications to visualize the movement of users, for example.
3.3.1 Location Sensor Fusion
The goal of the fusion of location information is to improve precision and accuracy by merging the sets of symbolic locations supplied by the various location sensors, in order to reduce the number of these locations to a minimum, ideally to a single symbolic location per user. This is quite difficult, because different sensors may differ in accuracy and sample rate as well.
The Location Sensor Fusion attribute is triggered by events, which are thrown by the Location Sensor Abstraction attributes.
These events contain information about the identity of the user concerned, his current location and the sensor by which the location has been determined. If the attribute Location Sensor Fusion receives such an event, it checks whether the set of symbolic locations of the user concerned has changed (compared with the last event). If this is the case, it notifies the GISS Core attribute about all symbolic locations this user is currently associated with.
However, this information is not very useful on its own if a certain user is associated with several locations. As described in chapter 3.2.1, a single location sensor may deliver multiple symbolic locations. Moreover, a certain user may have several location sensors, which supply symbolic locations differing in accuracy (i.e., at different levels in the location containment hierarchy). To cope with this challenge, we implemented a fusion algorithm that reduces the number of symbolic locations to a minimum (ideally to a single location).
In a first step, each symbolic location is associated with its number of occurrences. A symbolic location may occur several times if it is referred to by more than one sensor, or if a single sensor detects multiple tags which again refer to several locations. Furthermore, this number is added to the previously calculated number of occurrences of each symbolic location which is a child location of the considered one in the location containment hierarchy. For example, if - in Figure 4 - room2 occurs two times and desk occurs a single time, the value 2 of room2 is added to the value 1 of desk, whereby desk finally gets the value 3. In a final step, only those symbolic locations are kept which are assigned the highest number of occurrences. A further reduction can be achieved by assigning priorities to sensors (based on accuracy and confidence) and cumulating these priorities for each symbolic location instead of just counting the number of occurrences.
If the remaining fused locations have changed (i.e., if they differ from the fused locations the considered user is currently associated with), they are provided with the current timestamp, written to the database, and the GISS Core attribute is notified about where the user is probably located. Finally, the most accurate common location in the location hierarchy is calculated (i.e., the least upper bound of these symbolic locations) in order to get a single symbolic location. If it changes, the GISS Core attribute is triggered again.
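The counting step of this fusion algorithm can be sketched compactly, reusing the hypothetical SymbolicLocation class from the location-model sketch above. The method names are again invented, and the input is assumed to be non-empty and to share location instances; only the scoring scheme - occurrences propagated to child locations, then keeping the maxima - follows the description in this section.

import java.util.*;

// Sketch of the occurrence-counting fusion step described above. 'observed'
// holds one entry per symbolic location reported by any sensor of one user
// at one point in time (duplicates allowed, assumed non-empty).
class LocationFusion {
    static List<SymbolicLocation> fuse(List<SymbolicLocation> observed) {
        // Step 1: count raw occurrences of each reported location.
        Map<SymbolicLocation, Integer> counts = new HashMap<>();
        for (SymbolicLocation loc : observed) counts.merge(loc, 1, Integer::sum);

        // Step 2: a location's score is its own count plus the counts of all
        // its ancestors, so that e.g. 'desk' (1) inherits the occurrences of
        // 'room2' (2) and ends up with the value 3, as in the example above.
        Map<SymbolicLocation, Integer> scores = new HashMap<>();
        for (SymbolicLocation loc : counts.keySet()) {
            int s = 0;
            for (SymbolicLocation ancestorOrSelf : loc.pathFromRoot()) {
                s += counts.getOrDefault(ancestorOrSelf, 0);
            }
            scores.put(loc, s);
        }

        // Step 3: keep only the locations with the highest score.
        int max = Collections.max(scores.values());
        List<SymbolicLocation> fused = new ArrayList<>();
        for (Map.Entry<SymbolicLocation, Integer> e : scores.entrySet()) {
            if (e.getValue() == max) fused.add(e.getKey());
        }
        return fused;
    }
}

Replacing the constant 1 in step 1 with a per-sensor priority would give the weighted variant mentioned above.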
Similarly, if the attribute Proximity Fusion\nis notified about an ended proximity, it checks if the users are still\nknown to be in proximity, and writes this change to the repository\nif not.\nFinally, if spatial proximity between the two users actually\nchanged, an event is thrown to notify the GISS Core-attribute\nabout this.\n4. CONTEXTSENSITIVE INTERACTION\n4.1 Overview\nIn most of today\"s systems supporting interaction in groups, the\nprovided means lack any awareness of the user\"s current context,\nthus being unable to adapt to his needs.\nIn our approach, we use context information to enhance\ninteraction and provide further services, which offer new\npossibilities to the user. Furthermore, we believe that interaction\nin groups also has to take into account the current context of the\ngroup itself and not only the context of individual group\nmembers. For this reason, we also retrieve information about the\ngroup\"s current context, derived from the contexts of the group\nmembers together with some sort of meta-information (see\nchapter 4.3).\nThe sources of context used for our application correspond with\nthe four primary context types given in chapter 1.1 - identity (I),\nlocation (L), time (T) and activity (A). As stated before, we also\ntake into account the context of the group the user is interaction\nwith, so that we could add a fifth type of context\ninformationgroup awareness (G) - to the classification. Using this context\ninformation, we can trigger context-aware activities in all of the\nthree categories described in chapter 1.1 - presentation of\ninformation (P), automatic execution of services (A) and tagging\nof context to information for later retrieval (T).\nTable 2 gives an overview of activities we have already\nimplemented; they are described comprehensively in chapter 4.4.\nThe table also shows which types of context information are used\nfor each activity and the category the activity could be classified\nin.\n93\nTable 2. Classification of implemented context-aware\nactivities\nService L T I A G P A T\nLocation Visualisation X X X\nGroup Building Support X X X X\nSupport for Synchronous\nCommunication\nX X X X\nSupport for Asynchronous\nCommunication\nX X X X X X X\nAvailability Management X X X\nTask Management Support X X X X\nMeeting Support X X X X X X\nReasons for implementing these very features are to take\nadvantage of all four types of context information in order to\nsupport group interaction by utilizing a comprehensive knowledge\nabout the situation a single user or a whole group is in.\nA critical issue for the user acceptance of such a system is the\nusability of its interface. We have evaluated several ways of\npresenting context-aware means of interaction to the user, until\nwe came to the solution we use right now. Although we think that\nthe user interface that has been implemented now offers the best\ntrade-off between seamless integration of features and ease of use,\nit would be no problem to extend the architecture with other user\ninterfaces, even on different platforms.\nThe chosen solution is based on an existing instant messenger,\nwhich offers several possibilities to integrate our system (see\nchapter 4.2). The biggest advantage of this approach is that the\nuser is confronted with a graphical user interface he is already\nused to in most cases. 
Furthermore, our system uses an instant messenger account as an identifier, so that the user does not have to register a further account anywhere else (for example, the user can use his already existing ICQ account).
4.2 Instant Messenger Integration
Our system is based upon an existing instant messenger, the so-called Simple Instant Messenger (SIM). The implementation of this messenger is carried out as a project at Sourceforge (http://sourceforge.net/).
SIM supports multiple messenger protocols such as AIM (http://www.aim.com/), ICQ and MSN (http://messenger.msn.com/). It also supports connections to multiple accounts at the same time. Furthermore, full support for SMS notification (where provided by the protocol used) is given.
SIM is based on a plug-in concept. All protocols, as well as parts of the user interface, are implemented as plug-ins. This architecture is also used to extend the application's abilities to communicate with external applications. For this purpose, a remote-control plug-in is provided, by which SIM can be controlled from external applications via a socket connection. This remote-control interface is extensively used by GISS for retrieving the contact list, setting the user's availability state or sending messages. The functionality of the plug-in was extended in several ways, for example to accept messages for an account (as if they had been sent via the messenger network).
The messenger, more exactly the contact list (i.e., a list of profiles of all people registered with the instant messenger, which is visualized by listing their names, as can be seen in Figure 5), is also used to display the locations of other members of the groups a user belongs to. This provides location awareness without taking too much space or requesting the user's full attention. A more comprehensive description of these features is given in chapter 4.4.
4.3 Sources of Context Information
While the location context of a user is obtained from our location-sensing subsystem described in chapter 3, we consider further types of context than location relevant for the support of group interaction, too.
Local time, as a very important context dimension, can easily be retrieved from the real-time clock of the user's system. Besides location and time, we also use context information about the user's activity and identity, where we exploit the functionality provided by the underlying instant messenger system. Identity (or more exactly, the mapping of IDs to names as well as additional information from the user's profile) can be distilled out of the contents of the user's contact list.
Information about the activity of a certain user is only available in a very restricted area, namely the activity at the computer itself. Other activities, like making a phone call, cannot be recognized with the current implementation of the activity sensor. The only context information used is the instant messenger's availability state, thus providing only a very coarse classification of the user's activity (online, offline, away, busy etc.). Although this may not seem to be very much information, it is surely relevant and can be used to improve or even enable several services.
Having collected the context information from all available users, it is now possible to distil some information about the context of a certain group.
Information about the context of a group includes how many members the group currently has, whether the group is meeting right now, which members are participating in a meeting, how many members have read which of the available posts from other team members, and so on.
Therefore, some additional information, like a list of members for each group, is needed. These lists can be assembled manually (by users joining and leaving groups) or retrieved automatically. The context of a group is secondary context and is aggregated from the available contexts of the group members. Every time the context of a single group member changes, the context of the whole group changes and has to be recalculated.
With knowledge about a user's context and the context of the groups he belongs to, we can provide several context-aware services to the user, which enhance his interaction abilities. A brief description of these services is given in chapter 4.4.
4.4 Group Interaction Support
4.4.1 Visualisation of Location Information
An important feature is the visualisation of location information, allowing users to be aware of the location of other users and of members of the groups they have joined.
As already described in chapter 2, we use two different forms of visualisation. The arguably more important one is to display location information in the contact list of the instant messenger, right beside the name, thus being always visible without drawing the user's attention to it (compared with a two-dimensional view, for example, which requires its own window for displaying a map of the environment).
Due to the restricted space in the contact list, it has been necessary to implement some sort of level-of-detail concept. As we use a hierarchical location model, we are able to determine the most accurate common location of two users. In the contact list, the current symbolic location one level below the previously calculated common location is then displayed. If, for example, user A currently resides in room P121 on the first floor of a building, and user B, who is to be displayed in the contact list of user A, is in room P304 on the third floor, the most accurate common location of these two users is the building they are in. For that reason, the floor of user B (i.e. one level below the common location, namely the building) is displayed in the contact list of user A. If both people reside on the same floor or even in the same room, the room would be taken.
Figure 5 shows a screenshot of the Simple Instant Messenger, where the current location of those people whose location is known by GISS is displayed in brackets right beside their name. On top of the image, the highlighted, integrated GISS toolbar is shown, which currently contains the following implemented functionality (from left to right): asynchronous communication for groups (see chapter 4.4.4), context-aware reminders (see chapter 4.4.6), two-dimensional visualisation of location information, forming and managing groups (see chapter 4.4.2), context-aware availability management (see chapter 4.4.5) and finally a button for terminating GISS.
Figure 5. GISS integration in Simple Instant Messenger
As displaying just this short form of location may not be enough for the user, because he may want to see the most accurate position available, a fully qualified position is shown if a name in the contact list is clicked (e.g. in the form of desk@room2@department1@1stfloor@building1@campus).
The second possible form of visualisation is a graphical one. We have evaluated a three-dimensional view based on a VRML model of the respective area (cf. Figure 6), but due to shortcomings in navigation and usability, we decided to use a two-dimensional view of the floor (referred to as level in the location hierarchy, cf. Figure 4). Other levels of granularity, like section (e.g. building) and region (e.g. campus), are also provided. In this floor-plan-based view, the current locations are shown in the manner of ICQ contacts, which are placed at the currently sensed location of the respective person. The availability status of a user - for example away if he is not at the computer right now, or busy if he does not want to be disturbed - is visualized by colour-coding the ICQ flower left beside the name. Furthermore, the floor-plan view shows the so-called virtual post-its, which are virtual counterparts of real-life post-its and serve as our means of asynchronous communication (more about virtual post-its can be found in chapter 4.4.4).
Figure 6. 3D view of the floor (VRML)
Figure 7 shows the two-dimensional map of a certain floor, where several users are currently located (visualized by their name and the flower beside it). The location of the client on which the map is displayed is visualized by a green circle. Down to the right, two virtual post-its can be seen.
Figure 7. 2D view of the floor
Another feature of the 2D view is the visualisation of the location history of users. As we store the complete history of a user's locations together with timestamps, we are able to provide information about the locations he has visited in the past. When the mouse is moved over the name of a certain user in the 2D view, footprints of that user, placed at the locations he has been to, are faded out the more strongly, the older the location information is.
4.4.2 Forming and Managing Groups
To support interaction in groups, it is first necessary to form groups. As groups can have different purposes, we distinguish two types of groups.
So-called static groups are built up manually by people joining and leaving them. Static groups can be further divided into two subtypes. In open static groups, everybody can join and leave at any time, useful for example to form a group of lecture attendees or some sort of interest group. Closed static groups have an owner, who decides which persons are allowed to join, although everybody can leave again at any time. Closed groups enable users, for example, to create a group of their friends, and thus to communicate with them easily.
In contrast to that, we also support the creation of dynamic groups. They are formed among persons who are at the same location at the same time. The creation of dynamic groups is only performed at locations where it makes sense to form groups, for example in lecture halls or meeting rooms, but not in corridors or outdoors. It would also not be very meaningful to form a group only of the people residing in the left front sector of a hall; instead, the complete hall should be considered. For these reasons, all the defined locations in the hierarchy are tagged according to whether they allow the formation of groups or not; a minimal sketch of such a tagged hierarchy is given below.
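The following Java sketch illustrates such a tagged location hierarchy, covering both the group-formation tag and the level-of-detail lookup of chapter 4.4.1. The class and method names are our own illustration, not part of the GISS implementation:

import java.util.*;

// Hypothetical node of the hierarchical location model (cf. Figure 4).
class Location {
    final String name;            // e.g. "P121", "1stfloor", "building1"
    final Location parent;        // null for the hierarchy root (e.g. campus)
    final boolean groupsAllowed;  // tag: may dynamic groups form here?

    Location(String name, Location parent, boolean groupsAllowed) {
        this.name = name;
        this.parent = parent;
        this.groupsAllowed = groupsAllowed;
    }

    List<Location> pathFromRoot() {
        List<Location> path = new ArrayList<>();
        for (Location l = this; l != null; l = l.parent) path.add(0, l);
        return path;
    }

    // Most accurate common location of two users (lowest common ancestor).
    static Location commonLocation(Location a, Location b) {
        List<Location> pa = a.pathFromRoot(), pb = b.pathFromRoot();
        Location common = null;
        for (int i = 0; i < Math.min(pa.size(), pb.size()) && pa.get(i) == pb.get(i); i++)
            common = pa.get(i);
        return common;
    }

    // Label shown in the viewer's contact list for another user: the
    // location one level below the common location on the other user's
    // branch, or the other user's room itself if both are in the same room.
    static String contactListLabel(Location viewer, Location other) {
        Location common = commonLocation(viewer, other);
        List<Location> pb = other.pathFromRoot();
        int i = pb.indexOf(common);
        return (i >= 0 && i + 1 < pb.size()) ? pb.get(i + 1).name : other.name;
    }
}

For the paper's example (user A in P121 on the first floor, user B in P304 on the third floor of the same building), commonLocation returns the building and contactListLabel returns B's floor, as described above.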
Dynamic groups are formed not only at the granularity of rooms, but also on higher levels of the hierarchy, for example among the people currently residing in the area of a department.
As the members of dynamic groups constantly change, it is possible to create an open static group out of them.
4.4.3 Synchronous Communication for Groups
The most important form of synchronous communication on computers today is instant messaging; some people even consider instant messaging to be the real killer application on the Internet. This has also motivated the decision to build GISS upon an instant messaging system.
In today's messenger systems, peer-to-peer communication is extensively supported. However, when it comes to communication in groups, the support is rather poor most of the time. Often, only sending a message to multiple recipients is supported, lacking means to take into account the current state of the recipients. Furthermore, groups can only be formed of members in one's contact list, making it impossible to send messages to a group where not all of its members are known (which may be the case in settings where the participants of a lecture form a group).
Our approach does not have the mentioned restrictions. We introduce group entries in the user's contact list, enabling the user to send messages to a group easily, without knowing who exactly is currently a member of it. Furthermore, group messages are only delivered to persons who are currently not busy, thus preventing a disturbance by a message which is possibly unimportant for the user.
These features cannot be carried out in the messenger network itself, so whenever a message to a group account is sent, we intercept it and route it through our system to all the recipients who are available at that time. Communication via a group account is also stored centrally, enabling people to query missed messages or simply view the message history.
4.4.4 Asynchronous Communication for Groups
Asynchronous communication in groups is not a new idea. The goal of this approach is not to reinvent the wheel, as email is perhaps the most widely used form of asynchronous communication on computers and is broadly accepted and standardized. In our work, we aim at the combination of asynchronous communication with location awareness.
For this reason, we introduce the concept of so-called virtual post-its (cf. [13]), which are messages that are bound to physical locations. These virtual post-its can either be visible to all users that are passing by, or they can be restricted to be visible to certain groups of people only. Moreover, a virtual post-it can also have an expiry date, after which it is dropped and not displayed anymore. Virtual post-its can also be commented on by others, thus providing some form of forum-like interaction, where each post-it forms a thread.
Virtual post-its are displayed automatically whenever an available user passes by for the first time. Afterwards, post-its can be accessed via the 2D viewer, where all visible post-its are shown. All readers of a post-it are logged and displayed when viewing it, providing some sort of awareness about the group members' past activities.
4.4.5 Context-aware Availability Management
Instant messengers in general provide some kind of availability information about a user.
Although this information can only be defined at a very coarse granularity, we have decided to use these means of gathering activity context, because the introduction of an additional one would strongly decrease the usability of the system.
To support the user in managing his availability, we provide an interface that lets the user define rules to adapt his availability to the current context. These rules follow the form on event (E) if condition (C) then action (A), which is directly supported by the ECA rules of the Context Framework described in chapter 1.3. The testing of conditions is periodically triggered by throwing events (whenever the context of a user changes). The condition itself is defined by the user, who can demand the change of his availability status as the action in the rule. As a condition, the user can define his location, a certain time (also triggering daily, every week or every month) or any logical combination of these criteria.
4.4.6 Context-Aware Reminders
Reminders [14] are used to give the user the opportunity of defining tasks and being reminded of them when certain criteria are fulfilled. Thus, a reminder can be seen as a post-it to oneself, which is only visible in certain cases. Reminders can be bound to a certain place or time, but also to the spatial proximity of users or groups. These criteria can be combined with Boolean operators, thus providing a powerful means to remind the user of tasks that he wants to carry out when a certain context occurs.
A reminder will only pop up the first time the actual context meets the defined criterion. When the reminder shows up, the user has the chance to resubmit it to be reminded again, for example five minutes later, or the next time a certain user is in spatial proximity.
4.4.7 Context-Aware Recognition and Notification of Group Meetings
With the available context information, we try to recognize meetings of a group. The determination of the criteria by which the system recognizes that a group is having a meeting is part of ongoing work. In a first approach, we use the location and activity context of the group members to determine a meeting. Whenever more than 50 % of the members of a group are available at a location where a meeting is considered to make sense (e.g. not in a corridor), a meeting minutes post-it is created at this location and all absent group members are notified of the meeting and the location where it takes place.
During the meeting, the comment feature of virtual post-its provides a means to take notes for all of the participants. When members join or leave the meeting, this is automatically added as a note to the list of comments.
Like the recognition of the beginning of a meeting, the recognition of its end is still part of ongoing work. If the end of the meeting is recognized, all group members get the complete list of comments as a meeting protocol at the end of the meeting.
5. CONCLUSIONS
This paper discussed the potential of supporting group interaction by using context information. First, we introduced the notions of context and context computing and motivated their value for supporting group interaction.
An architecture is presented to support context-aware group interaction in mobile, distributed environments. It is built upon a flexible and extensible framework, thus enabling easy adaptation to available context sources (e.g.
by adding additional sensors) as well as to the required form of representation.
We have prototypically developed a set of services which enhance group interaction by taking into account the current context of the users as well as the context of the groups themselves. Important features are the dynamic formation of groups; the visualization of location, both on a two-dimensional map and unobtrusively integrated into an instant messenger; asynchronous communication by virtual post-its, which are bound to certain locations; and a context-aware availability management, which adapts the availability status of a user to his current situation.
To provide location information, we have implemented a subsystem for the automated acquisition of location and proximity information provided by various sensors, which provides a technology-independent representation of locations and spatial proximities between users and merges this information using sensor-independent fusion algorithms. A history of locations as well as of spatial proximities is stored in a database, thus enabling context-history-based services.
6. REFERENCES
[1] Beer, W., Christian, V., Ferscha, A., Mehrmann, L. Modeling Context-aware Behavior by Interpreted ECA Rules. In Proceedings of the International Conference on Parallel and Distributed Computing (EUROPAR'03). (Klagenfurt, Austria, August 26-29, 2003). Springer Verlag, LNCS 2790, 1064-1073.
[2] Brown, P.J., Bovey, J.D., Chen, X. Context-Aware Applications: From the Laboratory to the Marketplace. IEEE Personal Communications, 4(5) (1997), 58-64.
[3] Chen, H., Kotz, D. A Survey of Context-Aware Mobile Computing Research. Technical Report TR2000-381, Computer Science Department, Dartmouth College, Hanover, New Hampshire, November 2000.
[4] Dey, A. Providing Architectural Support for Building Context-Aware Applications. Ph.D. Thesis, Department of Computer Science, Georgia Institute of Technology, Atlanta, November 2000.
[5] Domnitcheva, S. Location Modeling: State of the Art and Challenges. In Proceedings of the Workshop on Location Modeling for Ubiquitous Computing. (Atlanta, Georgia, United States, September 30, 2001). 13-19.
[6] Ferscha, A. Workspace Awareness in Mobile Virtual Teams. In Proceedings of the IEEE 9th International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE'00). (Gaithersburg, Maryland, March 14-16, 2000). IEEE Computer Society Press, 272-277.
[7] Ferscha, A. Coordination in Pervasive Computing Environments. In Proceedings of the Twelfth International IEEE Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE-2003). (June 9-11, 2003). IEEE Computer Society Press, 3-9.
[8] Leonhard, U. Supporting Location Awareness in Open Distributed Systems. Ph.D. Thesis, Department of Computing, Imperial College, London, May 1998.
[9] Ryan, N., Pascoe, J., Morse, D. Enhanced Reality Fieldwork: the Context-Aware Archaeological Assistant. Gaffney, V., Van Leusen, M., Exxon, S. (eds.) Computer Applications in Archaeology (1997).
[10] Schilit, B.N., Theimer, M. Disseminating Active Map Information to Mobile Hosts. IEEE Network, 8(5) (1994), 22-32.
[11] Schilit, B.N. A System Architecture for Context-Aware Mobile Computing. Ph.D. Thesis, Columbia University, Department of Computer Science, May 1995.
[12] Wang, B., Bodily, J., Gupta, S.K.S. Supporting Persistent Social Groups in Ubiquitous Computing Environments Using Context-Aware Ephemeral Group Service. In Proceedings of the Second IEEE International Conference on Pervasive Computing and Communications (PerCom'04). (March 14-17, 2004). IEEE Computer Society Press, 287-296.
[13] Pascoe, J. The Stick-e Note Architecture: Extending the Interface Beyond the User. In Proceedings of the 2nd International Conference on Intelligent User Interfaces (IUI'97). (Orlando, USA, 1997), 261-264.
[14] Dey, A., Abowd, G. CybreMinder: A Context-Aware System for Supporting Reminders. In Proceedings of the 2nd International Symposium on Handheld and Ubiquitous Computing (HUC'00). (Bristol, UK, 2000), 172-186.", "keywords": "software framework;xml configuration file;context awareness;location sense;event-condition-action;fifth contextdimension group-context;contextaware;group interaction;sensor fusion;mobility system"} {"name": "train_C-56", "title": "A Hierarchical Process Execution Support for Grid Computing", "abstract": "Grid is an emerging infrastructure used to share resources among virtual organizations in a seamless manner and to provide breakthrough computing power at low cost. Nowadays there are dozens of academic and commercial products that allow execution of isolated tasks on grids, but few products support the enactment of long-running processes in a distributed fashion. In order to address such subject, this paper presents a programming model and an infrastructure that hierarchically schedules process activities using available nodes in a wide grid environment. Their advantages are automatic and structured distribution of activities and easy process monitoring and steering.", "fulltext": "1. INTRODUCTION
Grid computing is a model for wide-area distributed and parallel computing across heterogeneous networks in multiple administrative domains. This research field aims to promote the sharing of resources and to provide breakthrough computing power over this wide network of virtual organizations in a seamless manner [8]. Traditionally, as in Globus [6], Condor-G [9] and Legion [10], there is a minimal infrastructure that provides data resource sharing, computational resource utilization management, and distributed execution. Specifically, considering distributed execution, most of the existing grid infrastructures support the execution of isolated tasks, but they do not consider task interdependencies as in processes (workflows) [12]. This deficiency precludes better scheduling algorithms, distributed execution coordination and automatic execution recovery.
There are few proposed middleware infrastructures that support process execution over the grid. In general, they model processes by interconnecting their activities through control and data dependencies. Among them, WebFlow [1] emphasizes an architecture to construct distributed processes; Opera-G [3] provides execution recovery and steering; GridFlow [5] focuses on improved scheduling algorithms that take advantage of activity dependencies; and SwinDew [13] supports totally distributed execution on peer-to-peer networks. However, such infrastructures contain scheduling algorithms that are either centralized by process [1, 3, 5], or completely distributed but difficult to monitor and control [13].
In order to address such constraints, this paper proposes a structured programming model for process description and a hierarchical process execution infrastructure.
The programming model employs structured control flow to promote controlled and contextualized activity execution. Complementarily, the support infrastructure, which executes a process specification, takes advantage of the hierarchical structure of a specified process in order to distribute and schedule strongly dependent activities as a unit, allowing better execution performance and fault-tolerance and providing localized communication.
The programming model and the support infrastructure, named Xavantes, are under implementation in order to show the feasibility of the proposed model and to demonstrate its two major advantages: promoting widely distributed process execution and scheduling, but in a controlled, structured and localized way.
The next section describes the programming model; Section 3, the support infrastructure for the proposed grid computing model. Section 4 demonstrates how the support infrastructure executes processes and distributes activities. Related works are presented and compared to the proposed model in Section 5. The last section concludes this paper, summarizing the advantages of the proposed hierarchical process execution support for the grid computing area, and lists some future works.
Figure 1: High-level framework of the programming model (UML: a ProcessElement is specialized into Process, Activity and Controller; processes and controllers aggregate many process elements)
2. PROGRAMMING MODEL
The programming model designed for the grid computing architecture is very similar to the one specified by the Business Process Execution Language (BPEL) [2]. Both describe processes in XML [4] documents, but the former specifies strictly synchronous and structured processes, and has more constructs for structured parallel control. The rationale behind its design is the possibility of hierarchically distributing the process control and coordination based on structured constructs, differently from BPEL, which does not allow hierarchical composition of processes.
In the proposed programming model, a process is a set of interdependent activities arranged to solve a certain problem. In detail, a process is composed of activities, subprocesses, and controllers (see Figure 1). Activities represent simple tasks that are executed on behalf of a process; subprocesses are processes executed in the context of a parent process; and controllers are control elements used to specify the execution order of these activities and subprocesses. Like structured languages, controllers can be nested and then determine the execution order of other controllers.
Data are exchanged among process elements through parameters. They are passed by value, in the case of simple objects, or by reference, if they are remote objects shared among elements of the same controller or process. External data can be accessed through data sources, such as relational databases or distributed objects.
2.1 Controllers
Controllers are structured control constructs used to define the control flow of processes. There are sequential and parallel controllers.
The sequential controller types are: block, switch, for and while. The block controller is a simple sequential construct, and the others mimic equivalent structured programming language constructs. Similarly, the parallel types are: par, parswitch, parfor and parwhile; a minimal sketch of their shared fork/join semantics is given below.
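The following Java sketch illustrates the fork/join-with-exit-condition semantics that these parallel controller types share, as elaborated in the next paragraphs. The class and method names are our own illustration, not part of the Xavantes API:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;
import java.util.function.Predicate;

class ParController<R> {
    private final ExecutorService pool = Executors.newCachedThreadPool();

    // Forks one task per process element, then joins; the exit condition is
    // evaluated each time an element finishes, so the join may end early.
    List<R> run(List<Callable<R>> elements, Predicate<List<R>> exitCondition)
            throws InterruptedException, ExecutionException {
        CompletionService<R> cs = new ExecutorCompletionService<>(pool);
        for (Callable<R> e : elements) cs.submit(e);
        List<R> results = new ArrayList<>();
        for (int done = 0; done < elements.size(); done++) {
            results.add(cs.take().get());
            if (exitCondition.test(results)) break; // conditional join
        }
        pool.shutdownNow(); // cancel remaining forks on early exit
        return results;
    }
}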
The parallel controller types extend their respective sequential counterparts to allow parallel execution of process elements. All parallel controller types fork the execution of one or more process elements and then wait for each execution to finish; indeed, they contain a fork and a join of execution. To implement a conditional join, all parallel controller types contain an exit condition, evaluated every time an element execution finishes, in order to determine when the controller must end.
The parfor and parwhile are the iterative versions of the parallel controller types. Both fork executions while the iteration condition is true. This provides flexibility to determine, at run time, the number of process elements to execute simultaneously.
When compared to workflow languages, the parallel controller types represent structured versions of the workflow control constructs, because they can nest other controllers and can also express the fixed and conditional forks and joins present in such languages.
2.2 Process Example
This section presents an example of a prime number search application that receives a certain range of integers and returns the set of primes contained in this range. The whole computation is performed by a process, which uses a parallel controller to start and dispatch several concurrent activities of the same type in order to find prime numbers. The portion of the XML document that describes the process and activity types is shown below. (The XML markup itself was lost in extraction; the embedded Java code fragments survive and are reproduced here, with comments indicating where each fragment belongs.)

// FindPrimes process type: initialization code, creating the shared
// result set and passing the parameters on to the PARFOR controller.
setPrimes(new RemoteHashSet());
parfor.setMin(getMin());
parfor.setMax(getMax());
parfor.setNumPrimes(getNumPrimes());
parfor.setNumActs(getNumActs());
parfor.setPrimes(getPrimes());
parfor.setCounterBegin(0);
parfor.setCounterEnd(getNumActs()-1);

// PARFOR iteration code: computes the subrange assigned to the
// activity started in this iteration.
int range = (getMax()-getMin()+1)/getNumActs();
int minNum = range*getCounter()+getMin();
int maxNum = minNum+range-1;
if (getCounter() == getNumActs()-1)
    maxNum = getMax();
findPrimes.setMin(minNum);
findPrimes.setMax(maxNum);
findPrimes.setNumPrimes(getNumPrimes());
findPrimes.setPrimes(getPrimes());

// FindPrimes activity type, between its CODE markers. The inner
// primality test was truncated in extraction; the trial-division loop
// below is a reconstruction consistent with the surrounding description.
for (int num=getMin(); num<=getMax(); num++) {
    // stop, required number of primes was found
    if (primes.size() >= getNumPrimes())
        break;
    boolean prime = true;
    for (int i=2; i < num; i++) {
        if (num % i == 0) {
            prime = false;
            break;
        }
    }
    if (prime)
        primes.add(num);
}

Firstly, a process type that finds prime numbers, named FindPrimes, is defined. It receives, through its input parameters, a range of integers in which prime numbers have to be found, the number of primes to be returned, and the number of activities to be executed in order to perform this work. At the end, the found prime numbers are returned as a collection through its output parameter.
This process contains a PARFOR controller aiming to execute a determined number of parallel activities. It iterates from 0 to getNumActs() - 1, which determines the number of activities, starting a parallel activity in each iteration. In this case, the controller divides the whole range of numbers into subranges of the same size and, in each iteration, starts a parallel activity that finds prime numbers in a specific subrange. These activities receive a shared object by reference in order to store the prime numbers just found and to control whether the required number of primes has been reached.
Finally, the activity type FindPrimes, used to find prime numbers in each subrange, is defined.
It receives, through its input parameters, the range of numbers in which it has to find primes, the total number of prime numbers to be found by the whole process, and, passed by reference, a collection object to store the found prime numbers. Between its CODE markers there is simple code to find prime numbers, which iterates over the specified range and verifies whether the current integer is a prime. Additionally, in each iteration, the code verifies whether the required number of primes, inserted into the primes collection by all concurrent activities, has been reached, and exits if so.
The advantage of using controllers is the possibility for the support infrastructure to determine the point of execution the process is in, allowing automatic recovery and monitoring, as well as the capability of instantiating and dispatching process elements only when there are enough computing resources available, reducing unnecessary overhead. Besides, due to their structured nature, controllers can be easily composed, and the support infrastructure can take advantage of this in order to distribute the nested controllers hierarchically to different machines over the grid, allowing enhanced scalability and fault-tolerance.
3. SUPPORT INFRASTRUCTURE
The support infrastructure comprises tools for the specification, and services for the execution and monitoring, of structured processes in highly distributed, heterogeneous and autonomous grid environments. It has services to monitor the availability of resources in the grid, to interpret processes and schedule activities and controllers, and to execute activities.
3.1 Infrastructure Architecture
The support infrastructure architecture is composed of groups of machines and data repositories, which preserve their administrative autonomy. Generally, localized machines and repositories, such as those in local networks or clusters, form a group. Each machine in a group must have a Java Virtual Machine (JVM) [11] and a Java Runtime Library, besides a combination of the following grid support services: group manager (GM), process coordinator (PC) and activity manager (AM). This combination determines what kind of group node the machine represents: a group server, a process server, or simply a worker (see Figure 2).
Figure 2: Infrastructure architecture (a group server hosts the Group Manager, a process server the Process Coordinator, and a worker the Activity Manager; each runs atop a JVM and communicates via RMI, and via JDBC with the group repository)
In a group there are one or more group managers, but only one acts as primary and the others as replicas. They are responsible for maintaining availability information about the group machines. Moreover, group managers maintain references to the data resources of the group. They use group repositories to persist and recover the location of nodes and their availability.
To control process execution, there are one or more process coordinators per group. They are responsible for instantiating and executing processes and controllers, selecting resources, and scheduling and dispatching activities to workers. In order to persist and recover process execution state and data, and also to load process specifications, they use group repositories.
Finally, on several group nodes there is an activity manager. It is responsible for executing activities on the hosting machine on behalf of the group's process coordinators, and for informing group managers of the current availability of the associated machine. Illustrative sketches of these three service interfaces are given below.
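The following Java/RMI interface sketches mirror the three services just described. The paper only states that nodes communicate via RMI; the names and signatures here are our own assumptions, not the actual Xavantes API:

import java.rmi.Remote;
import java.rmi.RemoteException;

// Marker for a reference to a grid node hosting a service (illustrative).
interface NodeRef {}

enum ServiceKind { PROCESS_COORDINATOR, ACTIVITY_MANAGER }

// Group manager: availability bookkeeping for the group's nodes.
interface GroupManagerService extends Remote {
    void reportAvailability(NodeRef node, boolean available) throws RemoteException;
    // Finds an available node hosting the requested service, locally if
    // possible, otherwise from an adjacent group (see Section 3.2).
    NodeRef findAvailable(ServiceKind kind) throws RemoteException;
}

// Process coordinator: instantiates and executes processes and controllers.
interface ProcessCoordinatorService extends Remote {
    void execute(Object processElement) throws RemoteException;
}

// Activity manager: executes activities on its host for the group's PCs.
interface ActivityManagerService extends Remote {
    void enqueue(Object activity) throws RemoteException; // pending activity queue
}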
Activity managers also maintain pending activity queues, containing activities waiting to be executed.
3.2 Inter-group Relationships
In order to model a real grid architecture, the infrastructure must comprise several, potentially all, local networks, as the Internet does. To satisfy this intent, local groups are connected to others, directly or indirectly, through their group managers (see Figure 3).
Figure 3: Inter-group relationships (groups, drawn as dashed ellipses, are connected through their GMs)
Each group manager deals with the requests of its group, in order to register local machines and maintain the corresponding availability information. Additionally, group managers communicate with the group managers of other groups. Each group manager exports coarse availability information to the group managers of adjacent groups, and also receives requests from external services to furnish detailed availability information. In this way, if there are resources available in external groups, it is possible to send processes, controllers and activities to these groups in order to execute them on external process coordinators and activity managers, respectively.
4. PROCESS EXECUTION
In the proposed grid architecture, a process is specified in XML, using controllers to determine control flow, referencing other processes and activities, and passing objects to their parameters in order to define data flow. Once specified, the process is compiled into a set of classes, which represent specific process, activity and controller types. At this point it can be instantiated and executed by a process coordinator.
4.1 Dynamic Model
To execute a specified process, it must be instantiated by referencing its type on a process coordinator service of a specific group. The initial parameters must also be passed to it, and then it can be started.
The process coordinator carries out the process by executing the process elements included in its body sequentially. If the element is a process or a controller, the process coordinator can choose to execute it on the same machine or to pass it to another process coordinator on a remote machine, if available. Otherwise, if the element is an activity, it is passed to an activity manager on an available machine.
Process coordinators request from the local group manager available machines that contain the required service, process coordinator or activity manager, in order to execute a process element. The group manager can then return a local machine, a machine in another group, or none, depending on the availability of such a resource in the grid. It returns an external worker (activity manager machine) if there are no available workers in the local group, and it returns an external process server (process coordinator machine) if there are no available process servers or workers in the local group. Obeying this rule, group managers try to find process servers in the same group as the available workers.
Figure 4: FindPrimes process execution (a PC runs the FindPrimes process and dispatches FindPrimes activities to AMs)
Such a procedure is followed recursively by all process coordinators that execute subprocesses or controllers of a process; one possible rendering of the selection rule is sketched below.
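The sketch below phrases our reading of this selection rule against a small helper interface, reusing the NodeRef marker from the Section 3.1 sketch. The corner case of a group with workers but no process server is our interpretation of the rule stated above:

interface Group {
    NodeRef localWorker();           // null if no worker is available locally
    NodeRef externalWorker();        // from an adjacent group, or null
    NodeRef localProcessServer();    // null if none is available locally
    NodeRef externalProcessServer(); // from an adjacent group, or null
}

class Scheduler {
    // Activities go to workers; subprocesses and controllers go to process
    // servers, kept in the same group as the available workers when possible.
    static NodeRef selectFor(boolean isActivity, Group g) {
        if (isActivity) {
            NodeRef w = g.localWorker();
            return (w != null) ? w : g.externalWorker();
        }
        NodeRef ps = g.localProcessServer();
        if (ps != null) return ps;
        if (g.localWorker() != null)
            return null; // wait locally: coordination stays near the workers
        return g.externalProcessServer();
    }
}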
Therefore, because processes are structured by nesting process elements, process execution is automatically distributed hierarchically through one or more grid groups, according to the availability and locality of computing resources.
The advantages of this distribution model are wide-area execution, which takes advantage of potentially all grid resources, and localized communication of process elements, because strongly dependent elements, which are under the same controller, are placed in the same or nearby groups. Besides, it supports easy monitoring and steering, due to its structured controllers, which maintain state and control over their inner elements.
4.2 Process Execution Example
Revisiting the example shown in Section 2.2, a process type is specified to find prime numbers in a certain range of numbers. In order to solve this problem, it creates a number of activities using the parfor controller. Each activity then finds primes in a determined part of the range of numbers.
Figure 4 shows an instance of this process type executing over the proposed infrastructure. A FindPrimes process instance is created on an available process coordinator (PC), which begins executing the parfor controller. In each iteration of this controller, the process coordinator requests from the group manager (GM) an available activity manager (AM) in order to execute a new instance of the FindPrimes activity. If there is an AM available in this group or in an external one, the process coordinator sends the activity class and initial parameters to this activity manager and requests its execution. Otherwise, if no activity manager is available, the controller enters a wait state until an activity manager is made available, or is created.
In parallel, whenever an activity finishes, its result is sent back to the process coordinator, which records it in the parfor controller. Then the controller waits until all activities that have been started are finished, and it ends. At this point, the process coordinator verifies that there is no other process element to execute and finishes the process.
5. RELATED WORK
There are several academic and commercial products that promise to support grid computing, aiming to provide interfaces, protocols and services to leverage the use of widely distributed resources in heterogeneous and autonomous networks. Among them, Globus [6], Condor-G [9] and Legion [10] are widely known. Aiming to standardize interfaces and services for the grid, the Open Grid Services Architecture (OGSA) [7] has been defined.
Grid architectures generally have services that manage computing resources and distribute the execution of independent tasks on the available ones. However, emerging architectures maintain task dependencies and automatically execute tasks in the correct order. They take advantage of these dependencies to provide automatic recovery and better distribution and scheduling algorithms.
Following such a model, WebFlow [1] is a process specification tool and execution environment constructed over CORBA that allows graphical composition of activities and their distributed execution in a grid environment.
Opera-G [3], like WebFlow, uses a process specification language similar to data flow diagram and workflow languages, but furnishes automatic execution recovery and limited steering of process execution.
The previously mentioned architectures, and others that enact processes over the grid, have centralized coordination. In order to surpass this limitation, systems like SwinDew [13] proposed a widely distributed process execution, in which each node knows where to execute the next activity or join activities in a peer-to-peer environment.
In the specific area of activity distribution and scheduling, emphasized in this work, GridFlow [5] is remarkable. It uses two-level scheduling: global and local. At the local level, it has services that predict computing resource utilization and activity duration. Based on this information, GridFlow employs a PERT-like technique that tries to forecast the activity execution start time and duration in order to better schedule activities to the available resources.
The architecture proposed in this paper, which encompasses a programming model and an execution support infrastructure, is widely decentralized, differently from WebFlow and Opera-G, being more scalable and fault-tolerant. But, like the latter, it is designed to support execution recovery.
Compared to SwinDew, the proposed architecture contains widely distributed process coordinators, which coordinate processes or parts of them, differently from SwinDew, where each node has only a limited view of the process: the activity that starts next. This makes it easier to monitor and control processes.
Finally, the support infrastructure breaks a process and its subprocesses up for grid execution, allowing a group to request another group for the coordination and execution of process elements on behalf of the first one. This is different from GridFlow, which can execute a process on at most two levels, with the global level as the only one responsible for scheduling subprocesses in other groups. This can limit the overall performance of processes and make the system less scalable.
6. CONCLUSION AND FUTURE WORK
Grid computing is an emerging research field that intends to promote distributed and parallel computing over the wide-area network of heterogeneous and autonomous administrative domains in a seamless way, similar to what the Internet does for data sharing. There are several products that support the execution of independent tasks over the grid, but only a few support the execution of processes with interdependent tasks.
In order to address such a subject, this paper proposes a programming model and a support infrastructure that allow the execution of structured processes in a widely distributed and hierarchical manner. This support infrastructure provides automatic, structured and recursive distribution of process elements over groups of available machines; better resource use, due to its on-demand creation of process elements; easy process monitoring and steering, due to its structured nature; and localized communication among strongly dependent process elements, which are placed under the same controller. These features contribute to better scalability, fault-tolerance and control for process execution over the grid.
Moreover, it opens doors for better scheduling algorithms, recovery mechanisms, and also dynamic modification schemes.
The next work will be the implementation of a recovery mechanism that uses the execution and data state of processes and controllers to recover process execution. After that, it is desirable to advance the scheduling algorithm to forecast machine use in the same or other groups and to foresee the start time of process elements, in order to use this information to pre-allocate resources and then obtain better process execution performance. Finally, it is interesting to investigate schemes of dynamic modification of processes over the grid, in order to evolve and adapt long-term processes to the continuously changing grid environment.
7. ACKNOWLEDGMENTS
We would like to thank Paulo C. Oliveira, from the State Treasury Department of Sao Paulo, for his thorough revision and insightful comments.
8. REFERENCES
[1] E. Akarsu, G. C. Fox, W. Furmanski, and T. Haupt. WebFlow: High-Level Programming Environment and Visual Authoring Toolkit for High Performance Distributed Computing. In Proceedings of Supercomputing (SC98), 1998.
[2] T. Andrews and F. Curbera. Specification: Business Process Execution Language for Web Services Version 1.1. IBM DeveloperWorks, 2003. Available at http://www-106.ibm.com/developerworks/library/wsbpel.
[3] W. Bausch. OPERA-G: A Microkernel for Computational Grids. PhD thesis, Swiss Federal Institute of Technology, Zurich, 2004.
[4] T. Bray and J. Paoli. Extensible Markup Language (XML) 1.0. XML Core WG, W3C, 2004. Available at http://www.w3.org/TR/2004/REC-xml-20040204.
[5] J. Cao, S. A. Jarvis, S. Saini, and G. R. Nudd. GridFlow: Workflow Management for Grid Computing. In Proceedings of the International Symposium on Cluster Computing and the Grid (CCGrid 2003), 2003.
[6] I. Foster and C. Kesselman. Globus: A Metacomputing Infrastructure Toolkit. Intl. J. Supercomputer Applications, 11(2):115-128, 1997.
[7] I. Foster, C. Kesselman, J. M. Nick, and S. Tuecke. The Physiology of the Grid: An Open Grid Services Architecture for Distributed Systems Integration. Open Grid Service Infrastructure WG, Global Grid Forum, 2002.
[8] I. Foster, C. Kesselman, and S. Tuecke. The Anatomy of the Grid: Enabling Scalable Virtual Organization. The Intl. Journal of High Performance Computing Applications, 15(3):200-222, 2001.
[9] J. Frey, T. Tannenbaum, M. Livny, I. Foster, and S. Tuecke. Condor-G: A Computational Management Agent for Multi-institutional Grids. In Proceedings of the Tenth Intl. Symposium on High Performance Distributed Computing (HPDC-10). IEEE, 2001.
[10] A. S. Grimshaw and W. A. Wulf. Legion - A View from 50,000 Feet. In Proceedings of the Fifth Intl. Symposium on High Performance Distributed Computing. IEEE, 1996.
[11] T. Lindholm and F. Yellin. The Java Virtual Machine Specification. Sun Microsystems, Second Edition, 1999.
[12] B. R. Schulze and E. R. M. Madeira. Grid Computing with Active Services. Concurrency and Computation: Practice and Experience Journal, 5(16):535-542, 2004.
[13] J. Yan, Y. Yang, and G. K. Raikundalia. Enacting Business Processes in a Decentralised Environment with P2P-Based Workflow Support. In Proceedings of the Fourth Intl. Conference on Web-Age Information Management (WAIM 2003), 2003.", "keywords": "distributed computing;parallel computing;distribute middleware;parallel execution;grid computing;process support;hierarchical process execution;grid architecture;process execution;distributed system;process description;distributed scheduling;scheduling algorithm;distributed application;distributed process;distributed execution"} {"name": "train_C-57", "title": "Congestion Games with Load-Dependent Failures: Identical Resources", "abstract": "We define a new class of games, congestion games with load-dependent failures (CGLFs), which generalizes the well-known class of congestion games, by incorporating the issue of resource failures into congestion games. In a CGLF, agents share a common set of resources, where each resource has a cost and a probability of failure. Each agent chooses a subset of the resources for the execution of his task, in order to maximize his own utility. The utility of an agent is the difference between his benefit from successful task completion and the sum of the costs over the resources he uses. CGLFs possess two novel features. It is the first model to incorporate failures into congestion settings, which results in a strict generalization of congestion games. In addition, it is the first model to consider load-dependent failures in such a framework, where the failure probability of each resource depends on the number of agents selecting this resource. Although, as we show, CGLFs do not admit a potential function, and in general do not have a pure strategy Nash equilibrium, our main theorem proves the existence of a pure strategy Nash equilibrium in every CGLF with identical resources and nondecreasing cost functions.", "fulltext": "1. INTRODUCTION
We study the effects of resource failures in congestion settings. This study is motivated by a variety of situations in multi-agent systems with unreliable components, such as machines, computers etc. We define a model for congestion games with load-dependent failures (CGLFs), which provides a simple and natural description of such situations. In this model, we are given a finite set of identical resources (service providers), where each element possesses a failure probability describing the probability of unsuccessful completion of its assigned tasks as a (nondecreasing) function of its congestion. There is a fixed number of agents, each having a task which can be carried out by any of the resources. For reliability reasons, each agent may decide to assign his task, simultaneously, to a number of resources. Thus, the congestion on the resources is not known in advance, but is strategy-dependent. Each resource is associated with a cost, which is a (nonnegative) function of the congestion experienced by this resource. The objective of each agent is to maximize his own utility, which is the difference between his benefit from successful task completion and the sum of the costs over the set of resources he uses. The benefits of the agents from successful completion of their tasks are allowed to vary across the agents.
The resource cost function describes the cost suffered by an agent for selecting that resource, as a function of the number of agents who have selected it. Thus, it is natural to assume that these functions are nonnegative. In addition, in many real-life applications of our model the resource cost functions have a special structure. In particular, they can
In particular, they can\nmonotonically increase or decrease with the number of the\nusers, depending on the context. The former case is\nmotivated by situations where high congestion on a resource\ncauses longer delay in its assigned tasks execution and as\na result, the cost of utilizing this resource might be higher.\nA typical example of such situation is as follows. Assume\nwe need to deliver an important package. Since there is no\nguarantee that a courier will reach the destination in time,\nwe might send several couriers to deliver the same package.\nThe time required by each courier to deliver the package\nincreases with the congestion on his way. In addition, the\npayment to a courier is proportional to the time he spends\nin delivering the package. Thus, the payment to the courier\nincreases when the congestion increases. The latter case\n(decreasing cost functions) describes situations where a group\nof agents using a particular resource have an opportunity to\nshare its cost among the group\"s members, or, the cost of\n210\nusing a resource decreases with the number of users,\naccording to some marketing policy.\nOur results\nWe show that CGLFs and, in particular, CGLFs with\nnondecreasing cost functions, do not admit a\npotential function. Therefore, the CGLF model can not be\nreduced to congestion games. Nevertheless, if the\nfailure probabilities are constant (do not depend on the\ncongestion) then a potential function is guaranteed to\nexist.\nWe show that CGLFs and, in particular, CGLFs with\ndecreasing cost functions, do not possess pure\nstrategy Nash equilibria. However, as we show in our main\nresult, there exists a pure strategy Nash\nequilibrium in any CGLF with nondecreasing cost\nfunctions.\nRelated work\nOur model extends the well-known class of congestion games\n[11]. In a congestion game, every agent has to choose from a\nfinite set of resources, where the utility (or cost) of an agent\nfrom using a particular resource depends on the number of\nagents using it, and his total utility (cost) is the sum of\nthe utilities (costs) obtained from the resources he uses. An\nimportant property of these games is the existence of pure\nstrategy Nash equilibria. Monderer and Shapley [9]\nintroduced the notions of potential function and potential game\nand proved that the existence of a potential function implies\nthe existence of a pure strategy Nash equilibrium. They\nobserved that Rosenthal [11] proved his theorem on\ncongestion games by constructing a potential function (hence,\nevery congestion game is a potential game). Moreover, they\nshowed that every finite potential game is isomorphic to a\ncongestion game; hence, the classes of finite potential games\nand congestion games coincide.\nCongestion games have been extensively studied and\ngeneralized. In particular, Leyton-Brown and Tennenholtz [5]\nextended the class of congestion games to the class of\nlocaleffect games. In a local-effect game, each agent\"s payoff is\neffected not only by the number of agents who have chosen\nthe same resources as he has chosen, but also by the number\nof agents who have chosen neighboring resources (in a given\ngraph structure). Monderer [8] dealt with another type of\ngeneralization of congestion games, in which the resource\ncost functions are player-specific (PS-congestion games). 
He defined PS-congestion games of type q (q-congestion games), where q is a positive number, and showed that every game in strategic form is a q-congestion game for some q. Player-specific resource cost functions were discussed for the first time by Milchtaich [6]. He showed that simple and strategy-symmetric PS-congestion games are not potential games, but always possess a pure strategy Nash equilibrium. PS-congestion games were generalized to weighted congestion games [6] (or, ID-congestion games [7]), in which the resource cost functions are not only player-specific, but also depend on the identity of the users of the resource. Ackermann et al. [1] showed that weighted congestion games admit pure strategy Nash equilibria if the strategy space of each player consists of the bases of a matroid on the set of resources.
Much of the work on congestion games has been inspired by the fact that every such game has a pure strategy Nash equilibrium. In particular, Fabrikant et al. [3] studied the computational complexity of finding pure strategy Nash equilibria in congestion games. Intensive study has also been devoted to quantifying the inefficiency of equilibria in congestion games. Koutsoupias and Papadimitriou [4] proposed the worst-case ratio of the social welfare achieved by a Nash equilibrium and by a socially optimal strategy profile (dubbed the price of anarchy) as a measure of the performance degradation caused by lack of coordination. Christodoulou and Koutsoupias [2] considered the price of anarchy of pure equilibria in congestion games with linear cost functions. Roughgarden and Tardos [12] used this approach to study the cost of selfish routing in networks with a continuum of users.
However, the above settings do not take into consideration the possibility that resources may fail to execute their assigned tasks. In the computer science context of congestion games, where the alternatives of concern are machines, computers, communication lines etc., which are obviously prone to failures, this issue should not be ignored.
Penn, Polukarov and Tennenholtz were the first to incorporate the issue of failures into congestion settings [10]. They introduced a class of congestion games with failures (CGFs) and proved that these games, while not being isomorphic to congestion games, always possess Nash equilibria in pure strategies. The CGF model significantly differs from ours. In a CGF, the authors considered the delay associated with successful task completion, where the delay for an agent is the minimum of the delays of his successful attempts, and the aim of each agent is to minimize his expected delay. In contrast with the CGF model, in our model we consider the total cost of the utilized resources, where each agent wishes to maximize the difference between his benefit from successful task completion and the sum of his costs over the resources he uses.
The above differences imply that CGFs and CGLFs possess different properties. In particular, if in our model the resource failure probabilities were constant and known in advance, then a potential function would exist; a candidate potential is sketched below.
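For completeness, one natural candidate for such a potential, in the notation of Section 2 below and under the assumption of a congestion-independent failure probability $f$, is the following (this derivation is ours; the paper omits it):

$$P(\sigma) \;=\; -\sum_{j \in N} f^{|\sigma_j|}\, v_j \;-\; \sum_{e \in M} \sum_{k=1}^{h^{\sigma}_e} c(k).$$

A unilateral deviation of agent $i$ changes the first sum by exactly the change in $i$'s expected benefit, since with constant $f$ his success probability $1 - f^{|\sigma_i|}$ depends only on his own strategy, while the second sum is Rosenthal's potential and changes by exactly the change in $i$'s incurred costs. Hence $P$ is an exact potential.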
This, however, does not hold for CGFs; in CGFs, the failure probabilities are constant, yet there is no potential function. Furthermore, the procedures proposed by the authors in [10] for the construction of a pure strategy Nash equilibrium are not valid in our model, even in the simple, agent-symmetric case where all agents have the same benefit from successful completion of their tasks.
Our work provides the first model of congestion settings with resource failures which considers the sum of congestion-dependent costs over utilized resources, and therefore does not extend the CGF model, but rather generalizes the classic model of congestion games. Moreover, it is the first model to consider load-dependent failures in the above context.
Organization
The rest of the paper is organized as follows. In Section 2 we define our model. In Section 3 we present our results. In 3.1 we show that CGLFs, in general, do not have pure strategy Nash equilibria. In 3.2 we focus on CGLFs with nondecreasing cost functions (nondecreasing CGLFs). We show that these games do not admit a potential function. However, in our main result we show the existence of pure strategy Nash equilibria in nondecreasing CGLFs. Section 4 is devoted to a short discussion. Many of the proofs are omitted from this conference version of the paper, and will appear in the full version.
2. THE MODEL
The scenarios considered in this work consist of a finite set of agents, where each agent has a task that can be carried out by any element of a set of identical resources (service providers). The agents simultaneously choose a subset of the resources in order to perform their tasks, and their aim is to maximize their own expected payoff, as described in the sequel.
Let $N$ be a set of $n$ agents ($n \in \mathbb{N}$), and let $M$ be a set of $m$ resources ($m \in \mathbb{N}$). Agent $i \in N$ chooses a strategy $\sigma_i \in \Sigma_i$, which is a (potentially empty) subset of the resources. That is, $\Sigma_i$ is the power set of the set of resources: $\Sigma_i = P(M)$. Given a subset $S \subseteq N$ of the agents, the set of strategy combinations of the members of $S$ is denoted by $\Sigma_S = \times_{i \in S} \Sigma_i$, and the set of strategy combinations of the complement subset of agents is denoted by $\Sigma_{-S}$ ($\Sigma_{-S} = \Sigma_{N \setminus S} = \times_{i \in N \setminus S} \Sigma_i$). The set of pure strategy profiles of all the agents is denoted by $\Sigma$ ($\Sigma = \Sigma_N$).
Each resource is associated with a cost, $c(\cdot)$, and a failure probability, $f(\cdot)$, each of which depends on the number of agents who use this resource. We assume that the failure probabilities of the resources are independent. Let $\sigma = (\sigma_1, \ldots, \sigma_n) \in \Sigma$ be a pure strategy profile. The ($m$-dimensional) congestion vector that corresponds to $\sigma$ is $h^\sigma = (h^\sigma_e)_{e \in M}$, where $h^\sigma_e = |\{i \in N : e \in \sigma_i\}|$. The failure probability of a resource $e$ is a monotone nondecreasing function $f : \{1, \ldots, n\} \to [0, 1)$ of the congestion experienced by $e$. The cost of utilizing resource $e$ is a function $c : \{1, \ldots, n\} \to \mathbb{R}_+$ of the congestion experienced by $e$. The outcome for agent $i \in N$ is denoted by $x_i \in \{\mathrm{S}, \mathrm{F}\}$, where $\mathrm{S}$ and $\mathrm{F}$, respectively, indicate whether the task execution succeeded or failed.
We say that the execution of agent $i$'s task succeeds if the task of agent $i$ is successfully completed by at least one of the resources chosen by him. The benefit of agent $i$ from his outcome $x_i$ is denoted by $V_i(x_i)$, where $V_i(S) = v_i$, a given (nonnegative) value, and $V_i(F) = 0$.
The utility of agent $i$ from strategy profile $\sigma$ and his outcome $x_i$, $u_i(\sigma, x_i)$, is the difference between his benefit from the outcome ($V_i(x_i)$) and the sum of the costs of the resources he has used:
$$u_i(\sigma, x_i) = V_i(x_i) - \sum_{e \in \sigma_i} c(h^\sigma_e).$$
The expected utility of agent $i$ from strategy profile $\sigma$, $U_i(\sigma)$, is, therefore:
$$U_i(\sigma) = \left(1 - \prod_{e \in \sigma_i} f(h^\sigma_e)\right) v_i - \sum_{e \in \sigma_i} c(h^\sigma_e),$$
where $1 - \prod_{e \in \sigma_i} f(h^\sigma_e)$ denotes the probability of successful completion of agent $i$'s task. We use the convention that $\prod_{e \in \emptyset} f(h^\sigma_e) = 1$. Hence, if agent $i$ chooses an empty set $\sigma_i = \emptyset$ (does not assign his task to any resource), then his expected utility, $U_i(\emptyset, \sigma_{-i})$, equals zero.
3. PURE STRATEGY NASH EQUILIBRIA IN CGLFS
In this section we present our results on CGLFs. We investigate the property of the (non-)existence of pure strategy Nash equilibria in these games. We show that this class of games does not, in general, possess pure strategy equilibria. Nevertheless, if the resource cost functions are nondecreasing then such equilibria are guaranteed to exist, despite the non-existence of a potential function.
3.1 Decreasing Cost Functions
We start by showing that the class of CGLFs and, in particular, the subclass of CGLFs with decreasing cost functions, does not, in general, possess Nash equilibria in pure strategies.
Consider a CGLF with two agents ($N = \{1, 2\}$) and two resources ($M = \{e_1, e_2\}$). The cost function of each resource is given by $c(x) = \frac{1}{x^x}$, where $x \in \{1, 2\}$, and the failure probabilities are $f(1) = 0.01$ and $f(2) = 0.26$. The benefits of the agents from successful task completion are $v_1 = 1.1$ and $v_2 = 4$. Below we present the payoff matrix of the game (rows are agent 1's strategies, columns are agent 2's strategies):

             ∅                {e1}              {e2}              {e1, e2}
∅            U1 = 0           U1 = 0            U1 = 0            U1 = 0
             U2 = 0           U2 = 2.96         U2 = 2.96         U2 = 1.9996
{e1}         U1 = 0.089       U1 = 0.564        U1 = 0.089        U1 = 0.564
             U2 = 0           U2 = 2.71         U2 = 2.96         U2 = 2.7396
{e2}         U1 = 0.089       U1 = 0.089        U1 = 0.564        U1 = 0.564
             U2 = 0           U2 = 2.96         U2 = 2.71         U2 = 2.7396
{e1, e2}     U1 = -0.90011    U1 = -0.15286     U1 = -0.15286     U1 = 0.52564
             U2 = 0           U2 = 2.71         U2 = 2.71         U2 = 3.2296

Table 1: Example for non-existence of pure strategy Nash equilibria in CGLFs.

It can be easily seen that for every pure strategy profile $\sigma$ in this game there exist an agent $i$ and a strategy $\sigma'_i \in \Sigma_i$ such that $U_i(\sigma_{-i}, \sigma'_i) > U_i(\sigma)$. That is, no pure strategy profile in this game is in equilibrium.
However, if the cost functions in a given CGLF do not decrease in the number of users, then, as we show in the main result of this paper, a pure strategy Nash equilibrium is guaranteed to exist.
3.2 Nondecreasing Cost Functions
This section focuses on the subclass of CGLFs with nondecreasing cost functions (henceforth, nondecreasing CGLFs). We show that nondecreasing CGLFs do not, in general, admit a potential function. Therefore, these games are not congestion games.
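The non-existence claim behind Table 1 can be checked mechanically. The sketch below recomputes the expected utilities from the definitions above and verifies by brute force that no profile is a pure strategy Nash equilibrium.

```python
from itertools import product

resources = ("e1", "e2")
strategies = [frozenset(s) for s in [(), ("e1",), ("e2",), ("e1", "e2")]]
v = {1: 1.1, 2: 4.0}     # benefits v1, v2
f = {1: 0.01, 2: 0.26}   # failure probabilities f(1), f(2)
c = {1: 1.0, 2: 0.25}    # costs c(x) = 1 / x^x

def utility(i, profile):
    """U_i(sigma) = (1 - prod_e f(h_e)) * v_i - sum_e c(h_e) over e in sigma_i."""
    sigma_i = profile[i - 1]
    h = {e: sum(1 for s in profile if e in s) for e in resources}
    fail = 1.0
    for e in sigma_i:
        fail *= f[h[e]]
    # Empty strategy: fail stays 1.0, so the success term is 0, as in the text.
    return (1.0 - fail) * v[i] - sum(c[h[e]] for e in sigma_i)

def is_pure_nash(profile):
    return all(utility(i, profile) + 1e-12 >=
               max(utility(i, profile[:i - 1] + (dev,) + profile[i:])
                   for dev in strategies)
               for i in (1, 2))

assert not any(is_pure_nash(p) for p in product(strategies, repeat=2))
print("Verified: this CGLF has no pure strategy Nash equilibrium.")
```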
Nevertheless, we prove that all such games possess pure strategy Nash equilibria.
3.2.1 The (Non-)Existence of a Potential Function
Recall that Monderer and Shapley [9] introduced the notions of potential function and potential game, where a potential game is defined to be a game that possesses a potential function. A potential function is a real-valued function over the set of pure strategy profiles with the property that the gain (or loss) of an agent shifting to another strategy, while the other agents' strategies are kept unchanged, equals the corresponding increment of the potential function. The authors [9] showed that the classes of finite potential games and congestion games coincide.
Here we show that the class of CGLFs and, in particular, the subclass of nondecreasing CGLFs, does not admit a potential function, and therefore is not included in the class of congestion games. However, for the special case of constant failure probabilities, a potential function is guaranteed to exist. To prove these statements we use the following characterization of potential games [9].
A path in $\Sigma$ is a sequence $\tau = (\sigma^0 \to \sigma^1 \to \cdots)$ such that for every $k \geq 1$ there exists a unique agent, say agent $i$, such that $\sigma^k = (\sigma^{k-1}_{-i}, \sigma_i)$ for some $\sigma_i \neq \sigma^{k-1}_i$ in $\Sigma_i$. A finite path $\tau = (\sigma^0 \to \sigma^1 \to \cdots \to \sigma^K)$ is closed if $\sigma^0 = \sigma^K$. It is a simple closed path if, in addition, $\sigma^l \neq \sigma^k$ for every $0 \leq l \neq k \leq K - 1$. The length of a simple closed path is defined to be the number of distinct points in it; that is, the length of $\tau = (\sigma^0 \to \sigma^1 \to \cdots \to \sigma^K)$ is $K$.
Theorem 1. [9] Let $G$ be a game in strategic form with a vector $U = (U_1, \ldots, U_n)$ of utility functions. For a finite path $\tau = (\sigma^0 \to \sigma^1 \to \cdots \to \sigma^K)$, let $U(\tau) = \sum_{k=1}^{K} [U_{i_k}(\sigma^k) - U_{i_k}(\sigma^{k-1})]$, where $i_k$ is the unique deviator at step $k$. Then, $G$ is a potential game if and only if $U(\tau) = 0$ for every simple closed path $\tau$ of length 4.
Load-Dependent Failures
Based on Theorem 1, we present the following counterexample that demonstrates the non-existence of a potential function in CGLFs.
We consider the following agent-symmetric game $G$ in which two agents ($N = \{1, 2\}$) wish to assign a task to two resources ($M = \{e_1, e_2\}$). The benefit from a successful task completion of each agent equals $v$, and the failure probability function strictly increases with the congestion. Consider the simple closed path of length 4 which is formed by
$$\alpha = (\emptyset, \{e_2\}), \quad \beta = (\{e_1\}, \{e_2\}), \quad \gamma = (\{e_1\}, \{e_1, e_2\}), \quad \delta = (\emptyset, \{e_1, e_2\}):$$

             {e2}                                {e1, e2}
∅            U1 = 0                              U1 = 0
             U2 = (1 - f(1))v - c(1)             U2 = (1 - f(1)^2)v - 2c(1)
{e1}         U1 = (1 - f(1))v - c(1)             U1 = (1 - f(2))v - c(2)
             U2 = (1 - f(1))v - c(1)             U2 = (1 - f(1)f(2))v - c(1) - c(2)

Table 2: Example for non-existence of potentials in CGLFs.

Therefore,
$$U_1(\alpha) - U_1(\beta) + U_2(\beta) - U_2(\gamma) + U_1(\gamma) - U_1(\delta) + U_2(\delta) - U_2(\alpha) = v(1 - f(1))(f(1) - f(2)) \neq 0.$$
Thus, by Theorem 1, nondecreasing CGLFs do not admit potentials.
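Theorem 1's length-4 path test is easy to evaluate numerically. The sketch below uses illustrative values of $v$, $f$ and $c$ of our own choosing and computes the deviation sum along the closed path $\alpha \to \beta \to \gamma \to \delta \to \alpha$; traversing the cycle in this direction yields the negative of the displayed identity, i.e., $v(1 - f(1))(f(2) - f(1))$, which is nonzero whenever $f(1) \neq f(2)$.

```python
# Illustrative instance with a strictly increasing failure probability.
v = 2.0
f = {1: 0.1, 2: 0.3}
c = {1: 0.5, 2: 0.5}

def U(i, profile):
    """Expected utility in the two-agent, two-resource agent-symmetric CGLF."""
    sigma_i = profile[i]
    h = {e: sum(1 for s in profile if e in s) for e in ("e1", "e2")}
    fail = 1.0
    for e in sigma_i:
        fail *= f[h[e]]
    return (1.0 - fail) * v - sum(c[h[e]] for e in sigma_i)

alpha = (frozenset(), frozenset({"e2"}))
beta  = (frozenset({"e1"}), frozenset({"e2"}))
gamma = (frozenset({"e1"}), frozenset({"e1", "e2"}))
delta = (frozenset(), frozenset({"e1", "e2"}))

# Unique deviators along alpha->beta->gamma->delta->alpha: agents 0, 1, 0, 1.
path_sum = (U(0, beta) - U(0, alpha)) + (U(1, gamma) - U(1, beta)) \
         + (U(0, delta) - U(0, gamma)) + (U(1, alpha) - U(1, delta))
expected = v * (1 - f[1]) * (f[2] - f[1])
print(path_sum, expected)  # both 0.36 here: nonzero, so no potential exists
```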
As a result, they are not congestion games. However, as presented in the next section, the special case in which the failure probabilities are constant always possesses a potential function.
Constant Failure Probabilities
We show below that CGLFs with constant failure probabilities always possess a potential function. This follows from the fact that the expected benefit (revenue) of each agent in this case does not depend on the choices of the other agents. In addition, for each agent, the sum of the costs over his chosen subset of resources equals the payoff of an agent choosing the same strategy in the corresponding congestion game.
Assume we are given a game $G$ with constant failure probabilities. Let $\tau = (\alpha \to \beta \to \gamma \to \delta \to \alpha)$ be an arbitrary simple closed path of length 4. Let $i$ and $j$ denote the active agents (deviators) in $\tau$ and let $z \in \Sigma_{-\{i,j\}}$ be a fixed strategy profile of the other agents. Let $\alpha = (x_i, x_j, z)$, $\beta = (y_i, x_j, z)$, $\gamma = (y_i, y_j, z)$, $\delta = (x_i, y_j, z)$, where $x_i, y_i \in \Sigma_i$ and $x_j, y_j \in \Sigma_j$. Then,
$$U(\tau) = U_i(x_i, x_j, z) - U_i(y_i, x_j, z) + U_j(y_i, x_j, z) - U_j(y_i, y_j, z) + U_i(y_i, y_j, z) - U_i(x_i, y_j, z) + U_j(x_i, y_j, z) - U_j(x_i, x_j, z)$$
$$= \left(1 - f^{|x_i|}\right) v_i - \sum_{e \in x_i} c(h^{(x_i, x_j, z)}_e) - \ldots - \left(1 - f^{|x_j|}\right) v_j + \sum_{e \in x_j} c(h^{(x_i, x_j, z)}_e)$$
$$= \left[\left(1 - f^{|x_i|}\right) v_i - \ldots - \left(1 - f^{|x_j|}\right) v_j\right] - \left[\sum_{e \in x_i} c(h^{(x_i, x_j, z)}_e) - \ldots - \sum_{e \in x_j} c(h^{(x_i, x_j, z)}_e)\right].$$
Notice that $\left[\left(1 - f^{|x_i|}\right) v_i - \ldots - \left(1 - f^{|x_j|}\right) v_j\right] = 0$, as a sum of a telescope series. The remaining sum equals 0 by applying Theorem 1 to congestion games, which are known to possess a potential function. Thus, by Theorem 1, $G$ is a potential game.
We note that the above result holds also for the more general setting with non-identical resources (having different failure probabilities and cost functions) and general cost functions (not necessarily monotone and/or nonnegative).
3.2.2 The Existence of a Pure Strategy Nash Equilibrium
In the previous section, we have shown that CGLFs and, in particular, nondecreasing CGLFs, do not admit a potential function, but this fact, in general, does not contradict the existence of an equilibrium in pure strategies. In this section, we present and prove the main result of this paper (Theorem 2), which shows the existence of pure strategy Nash equilibria in nondecreasing CGLFs.
Theorem 2. Every nondecreasing CGLF possesses a Nash equilibrium in pure strategies.
The proof of Theorem 2 is based on Lemmas 4, 7 and 8, which are presented in the sequel. We start with some definitions and observations that are needed for their proofs. In particular, we present the notions of A-, D- and S-stability and show that a strategy profile is in equilibrium if and only if it is A-, D- and S-stable. Furthermore, we prove the existence of such a profile in any given nondecreasing CGLF.
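Before developing the stability machinery, a brief numerical aside on the constant-failure claim above. The explicit potential used below, private success benefits plus a Rosenthal-style congestion-cost sum, is our own candidate for illustration; the paper establishes existence via Theorem 1 without exhibiting a potential.

```python
import itertools
import math

# Tiny CGLF with a constant failure probability f_const.
resources = ("e1", "e2")
strategies = [frozenset(s) for s in [(), ("e1",), ("e2",), ("e1", "e2")]]
v = [1.5, 2.5]          # benefits v_i
f_const = 0.2           # failure probability, independent of congestion
c = {1: 0.4, 2: 0.7}    # nondecreasing congestion-dependent costs

def congestion(profile):
    return {e: sum(1 for s in profile if e in s) for e in resources}

def utility(i, profile):
    h = congestion(profile)
    return (1.0 - f_const ** len(profile[i])) * v[i] \
           - sum(c[h[e]] for e in profile[i])

def potential(profile):
    """Candidate exact potential: benefit terms plus Rosenthal's cost sum."""
    h = congestion(profile)
    benefit = sum((1.0 - f_const ** len(s)) * v[i]
                  for i, s in enumerate(profile))
    rosenthal = sum(c[k] for e in resources for k in range(1, h[e] + 1))
    return benefit - rosenthal

# Every unilateral deviation changes utility and potential identically.
for prof in itertools.product(strategies, repeat=2):
    for i in (0, 1):
        for dev in strategies:
            new = prof[:i] + (dev,) + prof[i + 1:]
            assert math.isclose(utility(i, new) - utility(i, prof),
                                potential(new) - potential(prof),
                                abs_tol=1e-12)
print("Exact potential verified for the constant-failure case.")
```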
Definition 3. For any strategy profile $\sigma \in \Sigma$ and for any agent $i \in N$, the operation of adding precisely one resource to his strategy, $\sigma_i$, is called an A-move of $i$ from $\sigma$. Similarly, the operation of dropping a single resource is called a D-move, and the operation of switching one resource with another is called an S-move.
Clearly, if agent $i$ deviates from strategy $\sigma_i$ to strategy $\sigma'_i$ by applying a single A-, D- or S-move, then $\max\{|\sigma_i \setminus \sigma'_i|, |\sigma'_i \setminus \sigma_i|\} = 1$, and vice versa: if $\max\{|\sigma_i \setminus \sigma'_i|, |\sigma'_i \setminus \sigma_i|\} = 1$ then $\sigma'_i$ is obtained from $\sigma_i$ by applying exactly one such move. For simplicity of exposition, for any pair of sets $A$ and $B$, let $\mu(A, B) = \max\{|A \setminus B|, |B \setminus A|\}$.
The following lemma implies that any strategy profile, in which no agent wishes unilaterally to apply a single A-, D- or S-move, is a Nash equilibrium. More precisely, we show that if there exists an agent who benefits from a unilateral deviation from a given strategy profile, then there exists a single A-, D- or S-move which is profitable for him as well.
Lemma 4. Given a nondecreasing CGLF, let $\sigma \in \Sigma$ be a strategy profile which is not in equilibrium, and let $i \in N$ be such that $\exists x_i \in \Sigma_i$ for which $U_i(\sigma_{-i}, x_i) > U_i(\sigma)$. Then, there exists $y_i \in \Sigma_i$ such that $U_i(\sigma_{-i}, y_i) > U_i(\sigma)$ and $\mu(y_i, \sigma_i) = 1$.
Therefore, to prove the existence of a pure strategy Nash equilibrium, it suffices to look for a strategy profile for which no agent wishes to unilaterally apply an A-, D- or S-move. Based on the above observation, we define A-, D- and S-stability as follows.
Definition 5. A strategy profile $\sigma$ is said to be A-stable (resp., D-stable, S-stable) if there are no agents with a profitable A- (resp., D-, S-) move from $\sigma$. Similarly, we define a strategy profile $\sigma$ to be DS-stable if there are no agents with a profitable D- or S-move from $\sigma$.
The set of all DS-stable strategy profiles is denoted by $\Sigma^0$. Obviously, the profile $(\emptyset, \ldots, \emptyset)$ is DS-stable, so $\Sigma^0$ is not empty. Our goal is to find a DS-stable profile for which no profitable A-move exists, implying that this profile is in equilibrium. To describe how we achieve this, we define the notions of light (heavy) resources and (nearly-)even strategy profiles, which play a central role in the proof of our main result.
Definition 6. Given a strategy profile $\sigma$, resource $e$ is called $\sigma$-light if $h^\sigma_e \in \arg\min_{e' \in M} h^\sigma_{e'}$ and $\sigma$-heavy otherwise. A strategy profile $\sigma$ with no heavy resources will be termed even. A strategy profile $\sigma$ satisfying $|h^\sigma_e - h^\sigma_{e'}| \leq 1$ for all $e, e' \in M$ will be termed nearly-even.
Obviously, every even strategy profile is nearly-even. In addition, in a nearly-even strategy profile, all heavy resources (if any exist) have the same congestion. We also observe that the profile $(\emptyset, \ldots, \emptyset)$ is even (and DS-stable), so the subset of even, DS-stable strategy profiles is not empty.
Based on the above observations, we define two types of A-moves that are used in the sequel. Suppose $\sigma \in \Sigma^0$ is a nearly-even DS-stable strategy profile. For each agent $i \in N$, let $e_i \in \arg\min_{e \in M \setminus \sigma_i} h^\sigma_e$. That is, $e_i$ is a lightest resource not chosen previously by $i$.
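The single-move notions translate directly into code. The sketch below (hypothetical helpers; `utility` is any expected-utility function as defined in Section 2) enumerates A-, D- and S-moves and tests the stability notions of Definition 5; by Lemma 4, stability against all three move types is equivalent to being a pure strategy Nash equilibrium.

```python
from typing import FrozenSet, Iterator, List, Tuple

Profile = Tuple[FrozenSet[str], ...]

def single_moves(sigma_i: FrozenSet[str],
                 M: List[str]) -> Iterator[Tuple[str, FrozenSet[str]]]:
    """Yield all strategies reachable from sigma_i by one A-, D- or S-move."""
    outside = [e for e in M if e not in sigma_i]
    for e in outside:                       # A-moves: add one resource
        yield "A", sigma_i | {e}
    for e in sigma_i:                       # D-moves: drop one resource
        yield "D", sigma_i - {e}
    for e in sigma_i:                       # S-moves: swap one resource
        for e2 in outside:
            yield "S", (sigma_i - {e}) | {e2}

def is_stable(profile: Profile, M, utility, kinds=("A", "D", "S")) -> bool:
    """True iff no agent has a profitable move whose kind is in `kinds`."""
    for i, sigma_i in enumerate(profile):
        base = utility(i, profile)
        for kind, dev in single_moves(sigma_i, M):
            if kind in kinds:
                new = profile[:i] + (dev,) + profile[i + 1:]
                if utility(i, new) > base + 1e-12:
                    return False
    return True

# DS-stability of the empty profile is immediate: there are no D- or S-moves
# from empty strategies, matching the observation that (0,...,0) lies in Sigma^0.
M = ["e1", "e2"]
empty: Profile = (frozenset(), frozenset())
print(is_stable(empty, M, lambda i, p: 0.0, kinds=("D", "S")))  # True
```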
Then, if there exists any profitable A-move for agent $i$, the A-move with $e_i$ is profitable for $i$ as well. This is since if agent $i$ wishes to unilaterally add a resource, say $a \in M \setminus \sigma_i$, then $U_i(\sigma_{-i}, \sigma_i \cup \{a\}) > U_i(\sigma)$. Hence,
$$\left(1 - \prod_{e \in \sigma_i} f(h^\sigma_e) \cdot f(h^\sigma_a + 1)\right) v_i - \sum_{e \in \sigma_i} c(h^\sigma_e) - c(h^\sigma_a + 1) > \left(1 - \prod_{e \in \sigma_i} f(h^\sigma_e)\right) v_i - \sum_{e \in \sigma_i} c(h^\sigma_e)$$
$$\Rightarrow \quad v_i \prod_{e \in \sigma_i} f(h^\sigma_e) > \frac{c(h^\sigma_a + 1)}{1 - f(h^\sigma_a + 1)} \geq \frac{c(h^\sigma_{e_i} + 1)}{1 - f(h^\sigma_{e_i} + 1)}$$
$$\Rightarrow \quad U_i(\sigma_{-i}, \sigma_i \cup \{e_i\}) > U_i(\sigma).$$
If no agent wishes to change his strategy in this manner, i.e. $U_i(\sigma) \geq U_i(\sigma_{-i}, \sigma_i \cup \{e_i\})$ for all $i \in N$, then by the above $U_i(\sigma) \geq U_i(\sigma_{-i}, \sigma_i \cup \{a\})$ for all $i \in N$ and $a \in M \setminus \sigma_i$. Hence, $\sigma$ is A-stable and, by Lemma 4, $\sigma$ is a Nash equilibrium strategy profile. Otherwise, let $N(\sigma)$ denote the subset of all agents for which there exists $e_i$ such that a unilateral addition of $e_i$ is profitable. Let $a \in \arg\min_{e_i : i \in N(\sigma)} h^\sigma_{e_i}$, and let $i \in N(\sigma)$ be the agent for which $e_i = a$. If $a$ is $\sigma$-light, then let $\sigma' = (\sigma_{-i}, \sigma_i \cup \{a\})$. In this case we say that $\sigma'$ is obtained from $\sigma$ by a one-step addition of resource $a$, and $a$ is called an added resource. If $a$ is $\sigma$-heavy, then there exist a $\sigma$-light resource $b$ and an agent $j$ such that $a \in \sigma_j$ and $b \notin \sigma_j$. Then let $\sigma' = (\sigma_{-\{i,j\}}, \sigma_i \cup \{a\}, (\sigma_j \setminus \{a\}) \cup \{b\})$. In this case we say that $\sigma'$ is obtained from $\sigma$ by a two-step addition of resource $b$, and $b$ is called an added resource.
We notice that, in both cases, the congestion of each resource in $\sigma'$ is the same as in $\sigma$, except for the added resource, whose congestion in $\sigma'$ has increased by 1. Thus, since the added resource is $\sigma$-light and $\sigma$ is nearly-even, $\sigma'$ is nearly-even. Then, the following lemma implies the S-stability of $\sigma'$.
Lemma 7. In a nondecreasing CGLF, every nearly-even strategy profile is S-stable.
Coupled with Lemma 7, the following lemma shows that if $\sigma$ is a nearly-even and DS-stable strategy profile, and $\sigma'$ is obtained from $\sigma$ by a one- or two-step addition of resource $a$, then the only potential cause for a non-DS-stability of $\sigma'$ is the existence of an agent $k \in N$ with $\sigma'_k \neq \sigma_k$ who wishes to drop the added resource $a$.
Lemma 8. Let $\sigma$ be a nearly-even DS-stable strategy profile of a given nondecreasing CGLF, and let $\sigma'$ be obtained from $\sigma$ by a one- or two-step addition of resource $a$. Then, there are no profitable D-moves for any agent $i \in N$ with $\sigma'_i = \sigma_i$. For an agent $i \in N$ with $\sigma'_i \neq \sigma_i$, the only possible profitable D-move (if one exists) is to drop the added resource $a$.
We are now ready to prove our main result, Theorem 2. Let us briefly describe the idea behind the proof. By Lemma 4, it suffices to prove the existence of a strategy profile which is A-, D- and S-stable. We start with the set of even and DS-stable strategy profiles, which is obviously not empty. In this set, we consider the subset of strategy profiles with maximum congestion and maximum sum of the agents' utilities.
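The one- and two-step addition operations can be sketched as follows; this is our own rendering of the construction, with arbitrary tie-breaking among minimizers, as in the text. Repeatedly applying it from the empty profile mirrors the procedure used in the proof.

```python
def one_or_two_step_addition(profile, M, utility):
    """Among agents with a profitable A-move onto their lightest unused
    resource, pick the one whose target resource has minimum congestion and
    add it: directly if the resource is light, or via the two-step swap
    through another agent if it is heavy. Returns None if A-stable."""
    h = {e: sum(1 for s in profile if e in s) for e in M}
    h_min = min(h.values())
    candidates = []
    for i, sigma_i in enumerate(profile):
        unused = [e for e in M if e not in sigma_i]
        if not unused:
            continue
        e_i = min(unused, key=lambda e: h[e])   # a lightest unused resource
        new = profile[:i] + (sigma_i | {e_i},) + profile[i + 1:]
        if utility(i, new) > utility(i, profile) + 1e-12:
            candidates.append((h[e_i], i, e_i))
    if not candidates:
        return None                              # profile is A-stable
    _, i, a = min(candidates)
    if h[a] == h_min:                            # one-step addition of a
        return profile[:i] + (profile[i] | {a},) + profile[i + 1:]
    # Two-step addition: some agent j holds the heavy a but misses a light b.
    for j, sigma_j in enumerate(profile):
        if a in sigma_j:
            light = [e for e in M if h[e] == h_min and e not in sigma_j]
            if light:
                out = list(profile)
                out[i] = profile[i] | {a}
                out[j] = (profile[j] - {a}) | {light[0]}
                return tuple(out)
    return None
```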
Assuming, on the contrary, that every DS-stable profile admits a profitable A-move, we show the existence of a strategy profile $x$ in the above subset such that a (one-step) addition of some resource $a$ to $x$ results in a DS-stable strategy. Then, by a finite series of one- or two-step addition operations, we obtain an even, DS-stable strategy profile with strictly higher congestion on the resources, contradicting the choice of $x$. The full proof is presented below.
Proof of Theorem 2: Let $\Sigma^1 \subseteq \Sigma^0$ be the subset of all even, DS-stable strategy profiles. Observe that since $(\emptyset, \ldots, \emptyset)$ is an even, DS-stable strategy profile, $\Sigma^1$ is not empty, and $\min_{\sigma \in \Sigma^0} |\{e \in M : e \text{ is } \sigma\text{-heavy}\}| = 0$. Then, $\Sigma^1$ could also be defined as
$$\Sigma^1 = \arg\min_{\sigma \in \Sigma^0} |\{e \in M : e \text{ is } \sigma\text{-heavy}\}|,$$
with $h^\sigma$ being the common congestion.
Now, let $\Sigma^2 \subseteq \Sigma^1$ be the subset of $\Sigma^1$ consisting of all those profiles with maximum congestion on the resources. That is,
$$\Sigma^2 = \arg\max_{\sigma \in \Sigma^1} h^\sigma.$$
Let $U_N(\sigma) = \sum_{i \in N} U_i(\sigma)$ denote the group utility of the agents, and let $\Sigma^3 \subseteq \Sigma^2$ be the subset of all profiles in $\Sigma^2$ with maximum group utility. That is,
$$\Sigma^3 = \arg\max_{\sigma \in \Sigma^2} \sum_{i \in N} U_i(\sigma) = \arg\max_{\sigma \in \Sigma^2} U_N(\sigma).$$
Consider first the simple case in which $\max_{\sigma \in \Sigma^1} h^\sigma = 0$. Obviously, in this case, $\Sigma^1 = \Sigma^2 = \Sigma^3 = \{x = (\emptyset, \ldots, \emptyset)\}$. We show below that by performing a finite series of (one-step) addition operations on $x$, we obtain an even, DS-stable strategy profile $y$ with higher congestion, that is with $h^y > h^x = 0$, in contradiction to $x \in \Sigma^2$. Let $z \in \Sigma^0$ be a nearly-even (not necessarily even) DS-stable profile such that $\min_{e \in M} h^z_e = 0$, and note that the profile $x$ satisfies the above conditions. Let $N(z)$ be the subset of agents for which a profitable A-move exists, and let $i \in N(z)$. Obviously, there exists a $z$-light resource $a$ such that $U_i(z_{-i}, z_i \cup \{a\}) > U_i(z)$ (otherwise, $\arg\min_{e \in M} h^z_e \subseteq z_i$, in contradiction to $\min_{e \in M} h^z_e = 0$). Consider the strategy profile $z' = (z_{-i}, z_i \cup \{a\})$, which is obtained from $z$ by a (one-step) addition of resource $a$ by agent $i$. Since $z$ is nearly-even and $a$ is $z$-light, we can easily see that $z'$ is nearly-even. Then, Lemma 7 implies that $z'$ is S-stable. Since $i$ is the only agent using resource $a$ in $z'$, by Lemma 8 no profitable D-moves are available. Thus, $z'$ is a DS-stable strategy profile. Therefore, since the number of resources is finite, there is a finite series of one-step addition operations on $x = (\emptyset, \ldots, \emptyset)$ that leads to a strategy profile $y \in \Sigma^1$ with $h^y = 1 > 0 = h^x$, in contradiction to $x \in \Sigma^2$.
We turn now to consider the other case where $\max_{\sigma \in \Sigma^1} h^\sigma \geq 1$. In this case we select from $\Sigma^3$ a strategy profile $x$, as described below, and use it to contradict our contrary assumption. Specifically, we show that there exists $x \in \Sigma^3$ such that for all $j \in N$,
$$v_j f(h^x)^{|x_j| - 1} \geq \frac{c(h^x + 1)}{1 - f(h^x + 1)}. \qquad (1)$$
Let $x'$ be a strategy profile which is obtained from $x$ by a (one-step) addition of some resource $a \in M$ by some agent $i \in N(x)$ (note that $x'$ is nearly-even). Then, (1) is derived from, and essentially equivalent to, the inequality $U_j(x') \geq U_j(x'_{-j}, x'_j \setminus \{a\})$ for all $a \in x'_j$. That is, after performing an A-move with $a$ by $i$, there is no profitable D-move with $a$. Then, by Lemmas 7 and 8, $x'$ is DS-stable. Following the same lines as above, we construct a procedure that initializes at $x'$ and achieves a strategy profile $y \in \Sigma^1$ with $h^y > h^x$, in contradiction to $x \in \Sigma^2$.
Now, let us confirm the existence of $x \in \Sigma^3$ that satisfies (1). Let $x \in \Sigma^3$ and let $M(x)$ be the subset of all resources for which there exists a profitable (one-step) addition. First, we show that (1) holds for all $j \in N$ such that $x_j \cap M(x) \neq \emptyset$, that is, for all those agents with one of their resources being desired by another agent.
Let $a \in M(x)$, and let $x'$ be the strategy profile that is obtained from $x$ by the (one-step) addition of $a$ by agent $i$. Assume, on the contrary, that there is an agent $j$ with $a \in x_j$ such that
$$v_j f(h^x)^{|x_j| - 1} < \frac{c(h^x + 1)}{1 - f(h^x + 1)}.$$
Let $x'' = (x'_{-j}, x'_j \setminus \{a\})$. Below we demonstrate that $x''$ is a DS-stable strategy profile and, since $x$ and $x''$ correspond to the same congestion vector, we conclude that $x''$ lies in $\Sigma^2$. In addition, we show that $U_N(x'') > U_N(x)$, contradicting the fact that $x \in \Sigma^3$.
To show that $x'' \in \Sigma^0$, we note that $x''$ is an even strategy profile, and thus no S-moves may be performed for $x''$. In addition, since $h^{x''} = h^x$ and $x \in \Sigma^0$, there are no profitable D-moves for any agent $k \neq i, j$. It remains to show that there are no profitable D-moves for agents $i$ and $j$ as well.
Since $U_i(x') > U_i(x)$, we get
$$v_i f(h^x)^{|x_i|} > \frac{c(h^x + 1)}{1 - f(h^x + 1)}$$
$$\Rightarrow \quad v_i f(h^{x''})^{|x''_i| - 1} = v_i f(h^x)^{|x_i|} > \frac{c(h^x + 1)}{1 - f(h^x + 1)} > \frac{c(h^x)}{1 - f(h^x)} = \frac{c(h^{x''})}{1 - f(h^{x''})},$$
which implies $U_i(x'') > U_i(x''_{-i}, x''_i \setminus \{b\})$ for all $b \in x''_i$. Thus, there are no profitable D-moves for agent $i$. By the DS-stability of $x$, for agent $j$ and for all $b \in x_j$, we have
$$U_j(x) \geq U_j(x_{-j}, x_j \setminus \{b\}) \;\Rightarrow\; v_j f(h^x)^{|x_j| - 1} \geq \frac{c(h^x)}{1 - f(h^x)}.$$
Then,
$$v_j f(h^{x''})^{|x''_j| - 1} > v_j f(h^{x''})^{|x''_j|} = v_j f(h^x)^{|x_j| - 1} \geq \frac{c(h^x)}{1 - f(h^x)} = \frac{c(h^{x''})}{1 - f(h^{x''})}$$
$\Rightarrow U_j(x'') > U_j(x''_{-j}, x''_j \setminus \{b\})$ for all $b \in x''_j$. Therefore, $x''$ is DS-stable and lies in $\Sigma^2$.
To show that $U_N(x'')$, the group utility of $x''$, satisfies $U_N(x'') > U_N(x)$, we note that $h^{x''} = h^x$, and thus $U_k(x'') = U_k(x)$ for all $k \in N \setminus \{i, j\}$. Therefore, we have to show that $U_i(x'') + U_j(x'') > U_i(x) + U_j(x)$, or $U_i(x'') - U_i(x) > U_j(x) - U_j(x'')$.
Observe that
$$U_i(x') > U_i(x) \;\Rightarrow\; v_i f(h^x)^{|x_i|} > \frac{c(h^x + 1)}{1 - f(h^x + 1)}$$
and, by the contrary assumption,
$$v_j f(h^x)^{|x_j| - 1} < \frac{c(h^x + 1)}{1 - f(h^x + 1)},$$
which yields $v_i f(h^x)^{|x_i|} > v_j f(h^x)^{|x_j| - 1}$. Thus,
$$U_i(x'') - U_i(x) = \left(1 - f(h^x)^{|x_i| + 1}\right) v_i - (|x_i| + 1) c(h^x) - \left[\left(1 - f(h^x)^{|x_i|}\right) v_i - |x_i| c(h^x)\right]$$
$$= v_i f(h^x)^{|x_i|} (1 - f(h^x)) - c(h^x)$$
$$> v_j f(h^x)^{|x_j| - 1} (1 - f(h^x)) - c(h^x)$$
$$= \left(1 - f(h^x)^{|x_j|}\right) v_j - |x_j| c(h^x) - \left[\left(1 - f(h^x)^{|x_j| - 1}\right) v_j - (|x_j| - 1) c(h^x)\right]$$
$$= U_j(x) - U_j(x'').$$
Therefore, $x''$ lies in $\Sigma^2$ and satisfies $U_N(x'') > U_N(x)$, in contradiction to $x \in \Sigma^3$.
Hence, if $x \in \Sigma^3$ then (1) holds for all $j \in N$ such that $x_j \cap M(x) \neq \emptyset$. Now let us see that there exists $x \in \Sigma^3$ such that (1) holds for all the agents. For that, choose an agent $i \in \arg\min_{k \in N} v_k f(h^x)^{|x_k|}$. If there exists $a \in x_i \cap M(x)$, then $i$ satisfies (1), implying, by the choice of agent $i$, that (1) holds for any agent $k \in N$. Otherwise, if no resource in $x_i$ lies in $M(x)$, then let $a \in x_i$ and $a' \in M(x)$. Since $a \in x_i$, $a' \notin x_i$, and $h^x_a = h^x_{a'}$, there exists an agent $j$ such that $a' \in x_j$ and $a \notin x_j$. One can easily check that the strategy profile $x' = (x_{-\{i,j\}}, (x_i \setminus \{a\}) \cup \{a'\}, (x_j \setminus \{a'\}) \cup \{a\})$ lies in $\Sigma^3$. Thus, $x'$ satisfies (1) for agent $i$, and therefore for any agent $k \in N$.
Now, let $x \in \Sigma^3$ satisfy (1). We show below that by performing a finite series of one- and two-step addition operations on $x$, we can achieve a strategy profile $y$ that lies in $\Sigma^1$ such that $h^y > h^x$, in contradiction to $x \in \Sigma^2$. Let $z \in \Sigma^0$ be a nearly-even (not necessarily even), DS-stable strategy profile such that
$$v_i \prod_{e \in z_i \setminus \{b\}} f(h^z_e) \geq \frac{c(h^z_b + 1)}{1 - f(h^z_b + 1)} \qquad (2)$$
for all $i \in N$ and for all $z$-light resources $b \in z_i$. We note that for the profile $x \in \Sigma^3 \subseteq \Sigma^1$, with all resources being $x$-light, conditions (2) and (1) are equivalent. Let $z'$ be obtained from $z$ by a one- or two-step addition of a $z$-light resource $a$. Obviously, $z'$ is nearly-even. In addition, $h^{z'}_e \geq h^z_e$ for all $e \in M$, and $\min_{e \in M} h^{z'}_e \geq \min_{e \in M} h^z_e$. To complete the proof we need to show that $z'$ is DS-stable and, in addition, that if $\min_{e \in M} h^{z'}_e = \min_{e \in M} h^z_e$ then $z'$ has property (2). The DS-stability of $z'$ follows directly from Lemmas 7 and 8, and from (2) with respect to $z$. It remains to prove property (2) for $z'$ with $\min_{e \in M} h^{z'}_e = \min_{e \in M} h^z_e$. Using (2) with respect to $z$, for any agent $k$ with $z'_k = z_k$ and for any $z'$-light resource $b \in z'_k$, we get
$$v_k \prod_{e \in z'_k \setminus \{b\}} f(h^{z'}_e) \geq v_k \prod_{e \in z_k \setminus \{b\}} f(h^z_e) \geq \frac{c(h^z_b + 1)}{1 - f(h^z_b + 1)} = \frac{c(h^{z'}_b + 1)}{1 - f(h^{z'}_b + 1)},$$
as required. Now let us consider the rest of the agents. Assume $z'$ is obtained by the one-step addition of $a$ by agent $i$. In this case, $i$ is the only agent with $z'_i \neq z_i$. The required property for agent $i$ follows directly from $U_i(z') > U_i(z)$. In the case of a two-step addition, let $z' = (z_{-\{i,j\}}, z_i \cup \{b\}, (z_j \setminus \{b\}) \cup \{a\})$, where $b$ is a $z$-heavy resource.
For agent $i$, from $U_i(z_{-i}, z_i \cup \{b\}) > U_i(z)$ we get
$$\left(1 - \prod_{e \in z_i} f(h^z_e) \cdot f(h^z_b + 1)\right) v_i - \sum_{e \in z_i} c(h^z_e) - c(h^z_b + 1) > \left(1 - \prod_{e \in z_i} f(h^z_e)\right) v_i - \sum_{e \in z_i} c(h^z_e)$$
$$\Rightarrow \quad v_i \prod_{e \in z_i} f(h^z_e) > \frac{c(h^z_b + 1)}{1 - f(h^z_b + 1)}, \qquad (3)$$
and note that since $h^z_b \geq h^z_e$ for all $e \in M$ and, in particular, for all $z'$-light resources, then
$$\frac{c(h^z_b + 1)}{1 - f(h^z_b + 1)} \geq \frac{c(h^{z'}_{e'} + 1)}{1 - f(h^{z'}_{e'} + 1)} \qquad (4)$$
for any $z'$-light resource $e'$.
Now, since $h^{z'}_e \geq h^z_e$ for all $e \in M$ and $b$ is $z$-heavy, then
$$v_i \prod_{e \in z'_i \setminus \{e'\}} f(h^{z'}_e) \geq v_i \prod_{e \in z'_i \setminus \{e'\}} f(h^z_e) = v_i \prod_{e \in (z_i \cup \{b\}) \setminus \{e'\}} f(h^z_e) \geq v_i \prod_{e \in z_i} f(h^z_e),$$
for any $z'$-light resource $e'$. The above, coupled with (3) and (4), yields the required. For agent $j$ we just use (2) with respect to $z$ and the equality $h^z_b = h^{z'}_a$. For any $z'$-light resource $e'$,
$$v_j \prod_{e \in z'_j \setminus \{e'\}} f(h^{z'}_e) \geq v_j \prod_{e \in z_j \setminus \{e'\}} f(h^z_e) \geq \frac{c(h^z_{e'} + 1)}{1 - f(h^z_{e'} + 1)} = \frac{c(h^{z'}_{e'} + 1)}{1 - f(h^{z'}_{e'} + 1)}.$$
Thus, since the number of resources is finite, there is a finite series of one- and two-step addition operations on $x$ that leads to a strategy profile $y \in \Sigma^1$ with $h^y > h^x$, in contradiction to $x \in \Sigma^2$. This completes the proof.
4. DISCUSSION
In this paper, we introduce and investigate congestion settings with unreliable resources, in which the probability of a resource's failure depends on the congestion experienced by this resource. We define a class of congestion games with load-dependent failures (CGLFs), which generalizes the well-known class of congestion games. We study the existence of pure strategy Nash equilibria and potential functions in the presented class of games. We show that these games do not, in general, possess pure strategy equilibria. Nevertheless, if the resource cost functions are nondecreasing then such equilibria are guaranteed to exist, despite the non-existence of a potential function.
The CGLF-model can be modified to the case where the agents pay only for the non-faulty resources they selected. Both the model discussed in this paper and the modified one are reasonable. In the full version we will show that the modified model leads to similar results. In particular, we can show the existence of a pure strategy equilibrium for nondecreasing CGLFs also in the modified model.
In future research we plan to consider various extensions of CGLFs. In particular, we plan to consider CGLFs where the resources may have different costs and failure probabilities, as well as CGLFs in which the resource failure probabilities are mutually dependent. In addition, it is of interest to develop an efficient algorithm for the computation of pure strategy Nash equilibria, as well as to discuss the social (in)efficiency of the equilibria.
5. REFERENCES
[1] H. Ackermann, H. Röglin, and B. Vöcking. Pure Nash equilibria in player-specific and weighted congestion games. In WINE-06, 2006.
[2] G. Christodoulou and E. Koutsoupias. The price of anarchy of finite congestion games. In Proceedings of the 37th Annual ACM Symposium on Theory of Computing (STOC-05), 2005.
[3] A. Fabrikant, C. Papadimitriou, and K. Talwar. The complexity of pure Nash equilibria. In STOC-04, pages 604-612, 2004.
[4] E. Koutsoupias and C. Papadimitriou. Worst-case equilibria.
In Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science, pages 404-413, 1999.
[5] K. Leyton-Brown and M. Tennenholtz. Local-effect games. In IJCAI-03, 2003.
[6] I. Milchtaich. Congestion games with player-specific payoff functions. Games and Economic Behavior, 13:111-124, 1996.
[7] D. Monderer. Solution-based congestion games. Advances in Mathematical Economics, 8:397-407, 2006.
[8] D. Monderer. Multipotential games. In IJCAI-07, 2007.
[9] D. Monderer and L. Shapley. Potential games. Games and Economic Behavior, 14:124-143, 1996.
[10] M. Penn, M. Polukarov, and M. Tennenholtz. Congestion games with failures. In Proceedings of the 6th ACM Conference on Electronic Commerce (EC-05), pages 259-268, 2005.
[11] R. Rosenthal. A class of games possessing pure-strategy Nash equilibria. International Journal of Game Theory, 2:65-67, 1973.
[12] T. Roughgarden and E. Tardos. How bad is selfish routing? Journal of the ACM, 49(2):236-259, 2002.

A Scalable Distributed Information Management System

Abstract. We present a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications by providing detailed views of nearby information and summary views of global information. To serve as a basic building block, a SDIMS should have four properties: scalability to many nodes and attributes, flexibility to accommodate a broad range of applications, administrative isolation for security and availability, and robustness to node and network failures. We design, implement and evaluate a SDIMS that (1) leverages Distributed Hash Tables (DHT) to create scalable aggregation trees, (2) provides flexibility through a simple API that lets applications control propagation of reads and writes, (3) provides administrative isolation through simple extensions to current DHT algorithms, and (4) achieves robustness to node and network reconfigurations through lazy reaggregation, on-demand reaggregation, and tunable spatial replication. Through extensive simulations and micro-benchmark experiments, we observe that our system is an order of magnitude more scalable than existing approaches, achieves isolation properties at the cost of modestly increased read latency in comparison to flat DHTs, and gracefully handles failures.

1. INTRODUCTION
The goal of this research is to design and build a Scalable Distributed Information Management System (SDIMS) that aggregates information about large-scale networked systems and that can serve as a basic building block for a broad range of large-scale distributed applications. Monitoring, querying, and reacting to changes in the state of a distributed system are core components of applications such as system management [15, 31, 37, 42], service placement [14, 43], data sharing and caching [18, 29, 32, 35, 46], sensor monitoring and control [20, 21], multicast tree formation [8, 9, 33, 36, 38], and naming and request routing [10, 11].
We therefore speculate that a SDIMS in a networked system would provide a distributed operating systems backbone and facilitate the development and deployment of new distributed services.
For a large scale information system, hierarchical aggregation is a fundamental abstraction for scalability. Rather than expose all information to all nodes, hierarchical aggregation allows a node to access detailed views of nearby information and summary views of global information. In a SDIMS based on hierarchical aggregation, different nodes can therefore receive different answers to the query "find a [nearby] node with at least 1 GB of free memory" or "find a [nearby] copy of file foo". A hierarchical system that aggregates information through reduction trees [21, 38] allows nodes to access information they care about while maintaining system scalability.
To be used as a basic building block, a SDIMS should have four properties. First, the system should be scalable: it should accommodate large numbers of participating nodes, and it should allow applications to install and monitor large numbers of data attributes. Enterprise and global scale systems today might have tens of thousands to millions of nodes and these numbers will increase over time. Similarly, we hope to support many applications, and each application may track several attributes (e.g., the load and free memory of a system's machines) or millions of attributes (e.g., which files are stored on which machines).
Second, the system should have flexibility to accommodate a broad range of applications and attributes. For example, read-dominated attributes like numCPUs rarely change in value, while write-dominated attributes like numProcesses change quite often. An approach tuned for read-dominated attributes will consume high bandwidth when applied to write-dominated attributes. Conversely, an approach tuned for write-dominated attributes will suffer from unnecessary query latency or imprecision for read-dominated attributes. Therefore, a SDIMS should provide mechanisms to handle different types of attributes and leave the policy decision of tuning replication to the applications.
Third, a SDIMS should provide administrative isolation. In a large system, it is natural to arrange nodes in an organizational or an administrative hierarchy. A SDIMS should support administrative isolation in which queries about an administrative domain's information can be satisfied within the domain, so that the system can operate during disconnections from other domains, so that an external observer cannot monitor or affect intra-domain queries, and so that domain-scoped queries can be supported efficiently.
Fourth, the system must be robust to node failures and disconnections. A SDIMS should adapt to reconfigurations in a timely fashion and should also provide mechanisms so that applications can trade off the cost of adaptation with the consistency level in the aggregated results when reconfigurations occur.
We draw inspiration from two previous works: Astrolabe [38] and Distributed Hash Tables (DHTs).
Astrolabe [38] is a robust information management system. Astrolabe provides the abstraction of a single logical aggregation tree that mirrors a system's administrative hierarchy. It provides a general interface for installing new aggregation functions and provides eventual consistency on its data.
Astrolabe is robust due to its use of an unstructured gossip protocol for disseminating information and its strategy of replicating all aggregated attribute values for a subtree to all nodes in the subtree. This combination allows any communication pattern to yield eventual consistency and allows any node to answer any query using local information. This high degree of replication, however, may limit the system's ability to accommodate large numbers of attributes. Also, although the approach works well for read-dominated attributes, an update at one node can eventually affect the state at all nodes, which may limit the system's flexibility to support write-dominated attributes.
Recent research in peer-to-peer structured networks resulted in Distributed Hash Tables (DHTs) [18, 28, 29, 32, 35, 46], a data structure that scales with the number of nodes and that distributes the read-write load for different queries among the participating nodes. It is interesting to note that although these systems export a global hash table abstraction, many of them internally make use of what can be viewed as a scalable system of aggregation trees to, for example, route a request for a given key to the right DHT node. Indeed, rather than export a general DHT interface, Plaxton et al.'s [28] original application makes use of hierarchical aggregation to allow nodes to locate nearby copies of objects. It seems appealing to develop a SDIMS abstraction that exposes this internal functionality in a general way so that scalable trees for aggregation can be a basic system building block alongside the DHTs.
At first glance, it might appear obvious that simply fusing DHTs with Astrolabe's aggregation abstraction will result in a SDIMS. However, meeting the SDIMS requirements forces a design to address four questions: (1) How to scalably map different attributes to different aggregation trees in a DHT mesh? (2) How to provide flexibility in the aggregation to accommodate different application requirements? (3) How to adapt a global, flat DHT mesh to attain the administrative isolation property? and (4) How to provide robustness without unstructured gossip and total replication?
The key contributions of this paper that form the foundation of our SDIMS design are as follows.
1. We define a new aggregation abstraction that specifies both attribute type and attribute name and that associates an aggregation function with a particular attribute type. This abstraction paves the way for utilizing the DHT system's internal trees for aggregation and for achieving scalability with both nodes and attributes.
2. We provide a flexible API that lets applications control the propagation of reads and writes and thus trade off update cost, read latency, replication, and staleness.
3. We augment an existing DHT algorithm to ensure path convergence and path locality properties in order to achieve administrative isolation.
4. We provide robustness to node and network reconfigurations by (a) providing temporal replication through lazy reaggregation that guarantees eventual consistency and (b) ensuring that our flexible API allows demanding applications to gain additional robustness by using tunable spatial replication of data aggregates, by performing fast on-demand reaggregation to augment the underlying lazy reaggregation, or by doing both.
We have built a prototype of SDIMS.
Through simulations and micro-benchmark experiments on a number of department machines and PlanetLab [27] nodes, we observe that the prototype achieves scalability with respect to both nodes and attributes through use of its flexible API, inflicts an order of magnitude lower maximum node stress than unstructured gossiping schemes, achieves isolation properties at a cost of modestly increased read latency compared to flat DHTs, and gracefully handles node failures.
This initial study discusses key aspects of an ongoing system building effort, but it does not address all issues in building a SDIMS. For example, we believe that our strategies for providing robustness will mesh well with techniques such as supernodes [22] and other ongoing efforts to improve DHTs [30] for further improving robustness. Also, although splitting aggregation among many trees improves scalability for simple queries, this approach may make complex and multi-attribute queries more expensive compared to a single tree. Additional work is needed to understand the significance of this limitation for real workloads and, if necessary, to adapt query planning techniques from DHT abstractions [16, 19] to scalable aggregation tree abstractions.
In Section 2, we explain the hierarchical aggregation abstraction that SDIMS provides to applications. In Sections 3 and 4, we describe the design of our system for achieving the flexibility, scalability, and administrative isolation requirements of a SDIMS. In Section 5, we detail the implementation of our prototype system. Section 6 addresses the issue of adaptation to topological reconfigurations. In Section 7, we present the evaluation of our system through large-scale simulations and microbenchmarks on real networks. Section 8 details the related work, and Section 9 summarizes our contribution.
2. AGGREGATION ABSTRACTION
Aggregation is a natural abstraction for a large-scale distributed information system because aggregation provides scalability by allowing a node to view detailed information about the state near it and progressively coarser-grained summaries about progressively larger subsets of a system's data [38].
Our aggregation abstraction is defined across a tree spanning all nodes in the system. Each physical node in the system is a leaf and each subtree represents a logical group of nodes. Note that logical groups can correspond to administrative domains (e.g., department or university) or groups of nodes within a domain (e.g., 10 workstations on a LAN in the CS department). An internal non-leaf node, which we call a virtual node, is simulated by one or more physical nodes at the leaves of the subtree for which the virtual node is the root. We describe how to form such trees in a later section.
Each physical node has local data stored as a set of (attributeType, attributeName, value) tuples such as (configuration, numCPUs, 16), (mcast membership, session foo, yes), or (file stored, foo, myIPaddress). The system associates an aggregation function $f_{type}$ with each attribute type, and for each level-$i$ subtree $T_i$ in the system, the system defines an aggregate value $V_{i,type,name}$ for each (attributeType, attributeName) pair as follows. For a (physical) leaf node $T_0$ at level 0, $V_{0,type,name}$ is the locally stored value for the attribute type and name, or NULL if no matching tuple exists.
Then the aggregate value for a level-$i$ subtree $T_i$ is the aggregation function for the type, $f_{type}$, computed across the aggregate values of each of $T_i$'s $k$ children:
$$V_{i,type,name} = f_{type}(V^0_{i-1,type,name}, V^1_{i-1,type,name}, \ldots, V^{k-1}_{i-1,type,name}).$$
Although SDIMS allows arbitrary aggregation functions, it is often desirable that these functions satisfy the hierarchical computation property [21]:
$$f(v_1, \ldots, v_n) = f(f(v_1, \ldots, v_{s_1}), f(v_{s_1+1}, \ldots, v_{s_2}), \ldots, f(v_{s_k+1}, \ldots, v_n)),$$
where $v_i$ is the value of an attribute at node $i$. For example, the average operation, defined as $avg(v_1, \ldots, v_n) = \frac{1}{n}\sum_{i=1}^{n} v_i$, does not satisfy the property. Instead, if an attribute stores values as tuples (sum, count), the attribute satisfies the hierarchical computation property while still allowing applications to compute the average from the aggregate sum and count values.
Finally, note that for a large-scale system, it is difficult or impossible to insist that the aggregation value returned by a probe correspond to the function computed over the current values at the leaves at the instant of the probe. Therefore our system provides only weak consistency guarantees, specifically eventual consistency as defined in [38].
3. FLEXIBILITY
A major innovation of our work is enabling flexible aggregate computation and propagation. The definition of the aggregation abstraction allows considerable flexibility in how, when, and where aggregate values are computed and propagated. While previous systems [15, 29, 38, 32, 35, 46] implement a single static strategy, we argue that a SDIMS should provide flexible computation and propagation to efficiently support a wide variety of applications with diverse requirements. In order to provide this flexibility, we develop a simple interface that decomposes the aggregation abstraction into three pieces of functionality: install, update, and probe.
This definition of the aggregation abstraction allows our system to provide a continuous spectrum of strategies, ranging from lazy aggregate computation and propagation on reads to aggressive immediate computation and propagation on writes. In Figure 1, we illustrate both extreme strategies and an intermediate strategy. Under the lazy Update-Local computation and propagation strategy, an update (or write) only affects local state. Then, a probe (or read) that reads a level-i aggregate value is sent up the tree to the issuing node's level-i ancestor and then down the tree to the leaves. The system then computes the desired aggregate value at each layer up the tree until the level-i ancestor that holds the desired value. Finally, the level-i ancestor sends the result down the tree to the issuing node. In the other extreme case of the aggressive Update-All immediate computation and propagation on writes [38], when an update occurs, changes are aggregated up the tree, and each new aggregate value is flooded to all of a node's descendants. In this case, each level-i node not only maintains the aggregate values for the level-i subtree but also receives and locally stores copies of all of its ancestors' level-j (j > i) aggregation values. Also, a leaf satisfies a probe for a level-i aggregate using purely local data. In an intermediate Update-Up strategy, the root of each subtree maintains the subtree's current aggregate value, and when an update occurs, the leaf node updates its local state and passes the update to its parent; then each successive enclosing subtree updates its aggregate value and passes the new value to its parent. This strategy satisfies a leaf's probe for a level-i aggregate value by sending the probe up to the level-i ancestor of the leaf and then sending the aggregate value down to the leaf. Finally, notice that other strategies exist. In general, an Update-Up_k-Down_j strategy aggregates up to the kth level and propagates the aggregate values of a node at level l (s.t. $l \leq k$) downward for j levels.
Figure 1: Flexible API (figure omitted; for the Update-Local, Update-Up, and Update-All strategies it depicts the messages triggered on an update, on a probe for the global aggregate value, and on a probe for a level-1 aggregate value).
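The following sketch, a schematic of our own with leaves at level 0 rather than any code from the paper, enumerates which tree levels are touched by a write under Update-Up_k-Down_j and instantiates the three named strategies for a height-3 tree.

```python
def levels_touched_on_update(k: int, j: int, tree_height: int):
    """Under Update-Up_k-Down_j (leaf = level 0), an update is re-aggregated
    at levels 1..k; each recomputed level-l aggregate is then pushed down to
    descendants at the j levels below l."""
    up = list(range(1, min(k, tree_height) + 1))
    down = {l: list(range(l - 1, max(l - j, 0) - 1, -1)) for l in up}
    return up, down

# The three named strategies in a height-3 tree:
for name, k, j in [("Update-Local", 0, 0),
                   ("Update-Up", 3, 0),
                   ("Update-All", 3, 3)]:
    print(name, levels_touched_on_update(k, j, tree_height=3))
```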
A SDIMS must provide a wide range of flexible computation and propagation strategies to applications for it to be a general abstraction. An application should be able to choose a particular mechanism based on its read-to-write ratio so as to reduce bandwidth consumption while attaining the required responsiveness and precision. Note that the read-to-write ratios of the attributes that applications install vary extensively. For example, a read-dominated attribute like numCPUs rarely changes in value, while a write-dominated attribute like numProcesses changes quite often. An aggregation strategy like Update-All works well for read-dominated attributes but suffers high bandwidth consumption when applied to write-dominated attributes. Conversely, an approach like Update-Local works well for write-dominated attributes but suffers from unnecessary query latency or imprecision for read-dominated attributes.
SDIMS also allows non-uniform computation and propagation across the aggregation tree, with different up and down parameters in different subtrees, so that applications can adapt to the spatial and temporal heterogeneity of read and write operations. With respect to spatial heterogeneity, access patterns may differ for different parts of the tree, requiring different propagation strategies for different parts of the tree. Similarly, with respect to temporal heterogeneity, access patterns may change over time, requiring different strategies over time.
3.1 Aggregation API
We provide the flexibility described above by splitting the aggregation API into three functions: Install() installs an aggregation function that defines an operation on an attribute type and specifies the update strategy that the function will use, Update() inserts or modifies a node's local value for an attribute, and Probe() obtains an aggregate value for a specified subtree. The install interface allows applications to specify the k and j parameters of the Update-Up_k-Down_j strategy along with the aggregation function. The update interface invokes the aggregation of an attribute on the tree according to the corresponding aggregation function's aggregation strategy. The probe interface not only allows applications to obtain the aggregated value for a specified tree but also allows a probing node to continuously fetch the values for a specified time, thus enabling an application to adapt to spatial and temporal heterogeneity. The rest of the section describes these three interfaces in detail.
3.1.1 Install
The Install operation installs an aggregation function in the system. The arguments for this operation are listed in Table 1. The attrType argument denotes the type of attributes on which this aggregation function is invoked. Installed functions are soft state that must be periodically renewed or they will be garbage collected at expTime.

Table 1: Arguments for the install operation
  parameter   description                                               optional
  attrType    Attribute Type
  aggrfunc    Aggregation Function
  up          How far upward each update is sent (default: all)         X
  down        How far downward each aggregate is sent (default: none)   X
  domain      Domain restriction (default: none)                        X
  expTime     Expiry Time

The arguments up and down specify the aggregate computation and propagation strategy Update-Up_k-Down_j. The domain argument, if present, indicates that the aggregation function should be installed on all nodes in the specified domain; otherwise the function is installed on all nodes in the system.
The probe interface not only allows applications to obtain\nthe aggregated value for a specified tree but also allows a probing\nnode to continuously fetch the values for a specified time, thus\nenabling an application to adapt to spatial and temporal heterogeneity.\nThe rest of the section describes these three interfaces in detail.\n3.1.1 Install\nThe Install operation installs an aggregation function in the\nsystem. The arguments for this operation are listed in Table 1. The\nattrType argument denotes the type of attributes on which this\naggregation function is invoked. Installed functions are soft state that\nmust be periodically renewed or they will be garbage collected at\nexpTime.\nThe arguments up and down specify the aggregate computation\n381\nUpdate Strategy On Update On Probe for Global Aggregate Value On Probe for Level-1 Aggregate Value\nUpdate-Local\nUpdate-Up\nUpdate-All\nFigure 1: Flexible API\nparameter description optional\nattrType Attribute Type\nattrName Attribute Name\nmode Continuous or One-shot (default:\none-shot)\nX\nlevel Level at which aggregate is sought\n(default: at all levels)\nX\nup How far up to go and re-fetch the\nvalue (default: none)\nX\ndown How far down to go and\nreaggregate (default: none)\nX\nexpTime Expiry Time\nTable 2: Arguments for the probe operation\nand propagation strategy Update-Upk-Downj. The domain\nargument, if present, indicates that the aggregation function should be\ninstalled on all nodes in the specified domain; otherwise the\nfunction is installed on all nodes in the system.\n3.1.2 Update\nThe Update operation takes three arguments attrType, attrName,\nand value and creates a new (attrType, attrName, value) tuple or\nupdates the value of an old tuple with matching attrType and\nattrName at a leaf node.\nThe update interface meshes with installed aggregate\ncomputation and propagation strategy to provide flexibility. In particular,\nas outlined above and described in detail in Section 5, after a leaf\napplies an update locally, the update may trigger re-computation\nof aggregate values up the tree and may also trigger propagation\nof changed aggregate values down the tree. Notice that our\nabstraction associates an aggregation function with only an attrType\nbut lets updates specify an attrName along with the attrType. This\ntechnique helps achieve scalability with respect to nodes and\nattributes as described in Section 4.\n3.1.3 Probe\nThe Probe operation returns the value of an attribute to an\napplication. The complete argument set for the probe operation is shown\nin Table 2. Along with the attrName and the attrType arguments, a\nlevel argument specifies the level at which the answers are required\nfor an attribute. In our implementation we choose to return results\nat all levels k < l for a level-l probe because (i) it is inexpensive as\nthe nodes traversed for level-l probe also contain level k aggregates\nfor k < l and as we expect the network cost of transmitting the\nadditional information to be small for the small aggregates which we\nfocus and (ii) it is useful as applications can efficiently get several\naggregates with a single probe (e.g., for domain-scoped queries as\nexplained in Section 4.2).\nProbes with mode set to continuous and with finite expTime\nenable applications to handle spatial and temporal heterogeneity. 
Probes with mode set to continuous and with finite expTime enable applications to handle spatial and temporal heterogeneity. When node A issues a continuous probe at level l for an attribute, then regardless of the up and down parameters, updates for the attribute at any node in A's level-l ancestor's subtree are aggregated up to level l and the aggregated value is propagated down along the path from the ancestor to A. Note that the continuous mode enables SDIMS to support a distributed sensor-actuator mechanism where a sensor monitors a level-i aggregate with a continuous mode probe and triggers an actuator upon receiving new values for the probe.
The up and down arguments enable applications to perform on-demand fast re-aggregation during reconfigurations, where a forced re-aggregation is done for the corresponding levels even if the aggregated value is available, as we discuss in Section 6. When present, the up and down arguments are interpreted as described in the install operation.
3.1.4 Dynamic Adaptation
At the API level, the up and down arguments in the install API can be regarded as hints, since they suggest a computation strategy but do not affect the semantics of an aggregation function. A SDIMS implementation can dynamically adjust its up/down strategies for an attribute based on its measured read/write frequency. But a virtual intermediate node needs to know the current up and down propagation values to decide if the local aggregate is fresh in order to answer a probe. This is the key reason why up and down need to be statically defined at install time and cannot be specified in the update operation. In dynamic adaptation, we implement a lease-based mechanism where a node issues a lease to a parent or a child denoting that it will keep propagating the updates to that parent or child. We are currently evaluating different policies to decide when to issue a lease and when to revoke a lease.
4. SCALABILITY
Our design achieves scalability with respect to both nodes and attributes through two key ideas. First, it carefully defines the aggregation abstraction to mesh well with its underlying scalable DHT system. Second, it refines the basic DHT abstraction to form an Autonomous DHT (ADHT) to achieve the administrative isolation properties that are crucial to scaling for large real-world systems. In this section, we describe these two ideas in detail.
4.1 Leveraging DHTs
In contrast to previous systems [4, 15, 38, 39, 45], SDIMS's aggregation abstraction specifies both an attribute type and attribute name and associates an aggregation function with a type rather than just specifying and associating a function with a name. Installing a single function that can operate on many different named attributes matching a type improves scalability for sparse attribute types with large, sparsely-filled name spaces. For example, to construct a file location service, our interface allows us to install a single function that computes an aggregate value for any named file. A subtree's aggregate value for (FILELOC, name) would be the ID of a node in the subtree that stores the named file. Conversely, Astrolabe copes with sparse attributes by having aggregation functions compute sets or lists and suggests that scalability can be improved by representing such sets with Bloom filters [6]. Supporting sparse names within a type provides at least two advantages.
Supporting sparse names within a type provides at least two advantages. First, when the value associated with a name is updated, only the state associated with that name needs to be updated and propagated to other nodes. Second, splitting values associated with different names into different aggregation values allows our system to leverage Distributed Hash Tables (DHTs) to map different names to different trees and thereby spread the function's logical root node's load and state across multiple physical nodes.

Figure 2: The DHT tree corresponding to key 111 (DHTtree111) and the corresponding aggregation tree.

Given this abstraction, scalably mapping attributes to DHTs is straightforward. DHT systems assign a long, random ID to each node and define an algorithm to route a request for key k to a node rootk such that the union of paths from all nodes forms a tree DHTtreek rooted at the node rootk. Now, as illustrated in Figure 2, by aggregating an attribute along the aggregation tree corresponding to DHTtreek for k = hash(attribute type, attribute name), different attributes will be aggregated along different trees.
In comparison to a scheme where all attributes are aggregated along a single tree, aggregating along multiple trees incurs lower maximum node stress: whereas in a single aggregation tree approach, the root and the intermediate nodes pass around more messages than leaf nodes, in a DHT-based multi-tree, each node acts as an intermediate aggregation point for some attributes and as a leaf node for other attributes. Hence, this approach distributes the onus of aggregation across all nodes.
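The following sketch shows the key-derivation step that assigns each attribute to its own tree. The use of SHA-1 and the ":" separator are our assumptions for illustration; any uniform hash that matches the DHT's ID space would serve.

```java
// Mapping an attribute to its aggregation tree: hash the attribute key so
// that different (type, name) pairs aggregate along different DHT trees.
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

class AttributeKey {
    // Derives the DHT key k = hash(attrType, attrName); SHA-1 is assumed here.
    static BigInteger dhtKey(String attrType, String attrName) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest(
            (attrType + ":" + attrName).getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, digest); // non-negative 160-bit DHT key
    }
}
```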
4.2 Administrative Isolation
Aggregation trees should provide administrative isolation by ensuring that for each domain, the virtual node at the root of the smallest aggregation subtree containing all nodes of that domain is hosted by a node in that domain. Administrative isolation is important for three reasons: (i) for security, so that updates and probes flowing in a domain are not accessible outside the domain; (ii) for availability, so that queries for values in a domain are not affected by failures of nodes in other domains; and (iii) for efficiency, so that domain-scoped queries can be simple and efficient.
To provide administrative isolation to aggregation trees, a DHT should satisfy two properties:
1. Path Locality: Search paths should always be contained in the smallest possible domain.
2. Path Convergence: Search paths for a key from different nodes in a domain should converge at a node in that domain.
Existing DHTs support path locality [18] or can easily support it by using domain nearness as the distance metric [7, 17], but they do not guarantee path convergence, as those systems try to optimize the search path to the root to reduce response latency. For example, Pastry [32] uses prefix routing in which each node's routing table contains one row per hexadecimal digit in the nodeId space, where the ith row contains a list of nodes whose nodeIds differ from the current node's nodeId in the ith digit, with one entry for each possible digit value. Given a routing topology, to route a packet to an arbitrary destination key, a node in Pastry forwards a packet to the node with a nodeId prefix matching the key in at least one more digit than the current node. If such a node is not known, the current node uses an additional data structure, the leaf set containing the L immediate higher and lower neighbors in the nodeId space, and forwards the packet to a node with an identical prefix but that is numerically closer to the destination key in the nodeId space. This process continues until the destination node appears in the leaf set, after which the message is routed directly. Pastry's expected number of routing steps is log n, where n is the number of nodes, but as Figure 3 illustrates, this algorithm does not guarantee path convergence: if two nodes in a domain have nodeIds that match a key in the same number of bits, both of them can route to a third node outside the domain when routing for that key.

Figure 3: Example showing how the isolation property is violated with original Pastry. We also show the corresponding aggregation tree.

Figure 4: Autonomous DHT satisfying the isolation property. The corresponding aggregation tree is also shown.

Simple modifications to Pastry's route table construction and key-routing protocols yield an Autonomous DHT (ADHT) that satisfies the path locality and path convergence properties. As Figure 4 illustrates, whenever two nodes in a domain share the same prefix with respect to a key and no other node in the domain has a longer prefix, our algorithm introduces a virtual node at the boundary of the domain corresponding to that prefix plus the next digit of the key; such a virtual node is simulated by the existing node whose id is numerically closest to the virtual node's id. Our ADHT's routing table differs from Pastry's in two ways. First, each node maintains a separate leaf set for each domain of which it is a part. Second, nodes use two proximity metrics when populating the routing tables: hierarchical domain proximity is the primary metric and network distance is secondary. Then, to route a packet to a global root for a key, the ADHT routing algorithm uses the routing table and the leaf set entries to route to each successive enclosing domain's root (the virtual or real node in the domain matching the key in the maximum number of digits). Additional details about the ADHT algorithm are available in an extended technical report [44].
Properties. Maintaining a different leaf set for each administrative hierarchy level increases the number of neighbors that each node tracks to (2^b)·log_b n + c·l, from (2^b)·log_b n + c in unmodified Pastry, where b is the number of bits in a digit, n is the number of nodes, c is the leaf set size, and l is the number of domain levels. Routing requires O(log_b n + l) steps compared to O(log_b n) steps in Pastry; also, each routing hop may be longer than in Pastry because the modified algorithm's routing table prefers same-domain nodes over nearby nodes. We experimentally quantify the additional routing costs in Section 7.
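As a rough worked example with parameters we chose purely for illustration (they are not taken from the paper's experiments): with b = 4 bits per digit, n = 65,536 nodes, leaf set size c = 16, and l = 3 domain levels, each ADHT node tracks on the order of (2^4)·log_16 65536 + 16·3 = 64 + 48 = 112 neighbors versus 64 + 16 = 80 in unmodified Pastry, and routing takes O(log_16 65536 + 3) = O(7) steps instead of O(4).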
In a large system, the ADHT topology allows domains to improve security for sensitive attribute types by installing them only within a specified domain. Then, aggregation occurs entirely within the domain, and a node external to the domain can neither observe nor affect the updates and aggregation computations of the attribute type. Furthermore, though we have not implemented this feature in the prototype, the ADHT topology would also support domain-restricted probes that could ensure that no one outside of a domain can observe a probe for data stored within the domain.
The ADHT topology also enhances availability by allowing the common case of probes for data within a domain to depend only on a domain's nodes. This, for example, allows a domain that becomes disconnected from the rest of the Internet to continue to answer queries for local data.
Aggregation trees that provide administrative isolation also enable the definition of simple and efficient domain-scoped aggregation functions to support queries like what is the average load on machines in domain X? For example, consider an aggregation function to count the number of machines in an example system with three machines, illustrated in Figure 5.

Figure 5: Example for domain-scoped queries.

Each leaf node l updates attribute NumMachines with a value vl containing a set of tuples of the form (Domain, Count) for each domain of which the node is a part. In the example, the node A1 with name A1.A. performs an update with the value ((A1.A.,1),(A.,1),(.,1)). An aggregation function at an internal virtual node hosted on node N with child set C computes the aggregate as a set of tuples: for each domain D that N is part of, form a tuple (D, Σ_{c∈C}(count | (D,count) ∈ v_c)). This computation is illustrated in Figure 5. Now a query for NumMachines with level set to MAX will return the aggregate values at each intermediate virtual node on the path to the root as a set of tuples (tree level, aggregated value), from which it is easy to extract the count of machines at each enclosing domain. For example, A1 would receive ((2, ((B1.B.,1),(B.,1),(.,3))), (1, ((A1.A.,1),(A.,2),(.,2))), (0, ((A1.A.,1),(A.,1),(.,1)))). Note that supporting domain-scoped queries would be less convenient and less efficient if aggregation trees did not conform to the system's administrative structure. It would be less efficient because each intermediate virtual node would have to maintain a list of all values at the leaves in its subtree along with their names, and it would be less convenient because applications that need an aggregate for a domain would have to pick values of nodes in that domain from the list returned by a probe and perform the computation themselves.
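A minimal sketch of this domain-scoped count aggregation follows; it is our own rendering of the NumMachines example, representing each value as a map from domain name to count rather than as a tuple set.

```java
// Illustrative domain-scoped count aggregation: an internal virtual node
// sums the per-domain counts reported by its children, for each domain
// that the node itself is part of.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class NumMachinesAggregation {
    static Map<String, Integer> aggregate(List<Map<String, Integer>> childValues,
                                          List<String> domainsOfThisNode) {
        Map<String, Integer> result = new HashMap<>();
        for (String domain : domainsOfThisNode) {
            int total = 0;
            for (Map<String, Integer> child : childValues) {
                total += child.getOrDefault(domain, 0); // sum per-domain counts
            }
            result.put(domain, total);
        }
        return result;
    }
}
```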
5. PROTOTYPE IMPLEMENTATION
The internal design of our SDIMS prototype comprises two layers: the Autonomous DHT (ADHT) layer manages the overlay topology of the system, and the Aggregation Management Layer (AML) maintains attribute tuples, performs aggregations, and stores and propagates aggregate values. Given the ADHT construction described in Section 4.2, each node implements an Aggregation Management Layer (AML) to support the flexible API described in Section 3. In this section, we describe the internal state and operation of the AML layer of a node in the system.

Figure 6: Example illustrating the data structures and their organization at a node.

We refer to a store of (attribute type, attribute name, value) tuples as a Management Information Base or MIB, following the terminology from Astrolabe [38] and SNMP [34]. We refer to an (attribute type, attribute name) tuple as an attribute key.
As Figure 6 illustrates, each physical node in the system acts as several virtual nodes in the AML: a node acts as a leaf for all attribute keys, as a level-1 subtree root for keys whose hash matches the node's ID in b prefix bits (where b is the number of bits corrected in each step of the ADHT's routing scheme), as a level-i subtree root for attribute keys whose hash matches the node's ID in the initial i·b bits, and as the system's global root for attribute keys whose hash matches the node's ID in more prefix bits than any other node (in case of a tie, the first non-matching bit is ignored and the comparison is continued [46]).
To support hierarchical aggregation, each virtual node at the root of a level-i subtree maintains several MIBs that store (1) child MIBs containing raw aggregate values gathered from children, (2) a reduction MIB containing locally aggregated values across this raw information, and (3) an ancestor MIB containing aggregate values scattered down from ancestors. This basic strategy of maintaining child, reduction, and ancestor MIBs is based on Astrolabe [38], but our structured propagation strategy channels information that flows up according to its attribute key, and our flexible propagation strategy only sends child updates up and ancestor aggregate results down as far as specified by the attribute key's aggregation function. Note that in the discussion below, for ease of explanation, we assume that the routing protocol corrects a single bit at a time (b = 1). Our system, built upon Pastry, handles multi-bit correction (b = 4) with a simple extension of the scheme described here.
For a given virtual node ni at level i, each child MIB contains the subset of a child's reduction MIB that contains tuples that match ni's node ID in i bits and whose up aggregation function attribute is at least i. These local copies make it easy for a node to recompute a level-i aggregate value when one child's input changes. Nodes maintain their child MIBs in stable storage and use a simplified version of the Bayou log exchange protocol (sans conflict detection and resolution) for synchronization after disconnections [26].
Virtual node ni at level i maintains a reduction MIB of tuples, with a tuple for each key present in any child MIB, containing the attribute type, attribute name, and output of the attribute type's aggregate functions applied to the children's tuples.
A virtual node ni at level i also maintains an ancestor MIB to store tuples containing an attribute key and a list of aggregate values at different levels scattered down from ancestors. Note that the list for a key might contain multiple aggregate values for the same level but aggregated at different nodes (see Figure 4). So the aggregate values are tagged not only with level information, but also with the ID of the node that performed the aggregation.
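The per-virtual-node state just described might be organized roughly as follows. The field shapes are our own illustration of the child/reduction/ancestor MIB structure, not the prototype's actual classes.

```java
// Rough sketch of per-virtual-node AML state.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class VirtualNodeState {
    final int level; // i: this virtual node roots a level-i subtree

    // One child MIB per child: attribute key -> raw aggregate from that child.
    final Map<String, Map<AttrKey, Object>> childMibs = new HashMap<>();

    // Reduction MIB: attribute key -> locally aggregated value over children.
    final Map<AttrKey, Object> reductionMib = new HashMap<>();

    // Ancestor MIB: attribute key -> values scattered down from ancestors.
    // Each value is tagged with (level, aggregating node ID) because several
    // nodes may aggregate for the same level.
    final Map<AttrKey, List<TaggedValue>> ancestorMib = new HashMap<>();

    VirtualNodeState(int level) { this.level = level; }

    record AttrKey(String type, String name) {}
    record TaggedValue(int level, String aggregatorId, Object value) {}
}
```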
Level-0 differs slightly from other levels. Each level-0 leaf node maintains a local MIB rather than maintaining child MIBs and a reduction MIB. This local MIB stores information about the local node's state inserted by local applications via update() calls. We envision various sensor programs and applications inserting data into the local MIB. For example, one program might monitor local configuration and perform updates with information such as total memory, free memory, etc. A distributed file system might perform an update for each file stored on the local node.
Along with these MIBs, a virtual node maintains two other tables: an aggregation function table and an outstanding probes table. An aggregation function table contains the aggregation function and installation arguments (see Table 1) associated with an attribute type or an attribute type and name. Each aggregation function is installed on all nodes in a domain's subtree, so the aggregation function table can be thought of as a special case of the ancestor MIB, with domain functions always installed up to a root within a specified domain and down to all nodes within the domain. The outstanding probes table maintains temporary information regarding in-progress probes.
Given these data structures, it is simple to support the three API functions described in Section 3.1.
Install The Install operation (see Table 1) installs on a domain an aggregation function that acts on a specified attribute type. Execution of an install operation for function aggrFunc on attribute type attrType proceeds in two phases: first the install request is passed up the ADHT tree with the attribute key (attrType, null) until it reaches the root for that key within the specified domain. Then, the request is flooded down the tree and installed on all intermediate and leaf nodes.
Update When a level-i virtual node receives an update for an attribute from a child below, it first recomputes the level-i aggregate value for the specified key, stores that value in its reduction MIB, and then, subject to the function's up and domain parameters, passes the updated value to the appropriate parent based on the attribute key. Also, the level-i (i ≥ 1) virtual node sends the updated level-i aggregate to all its children if the function's down parameter exceeds zero. Upon receipt of a level-i aggregate from a parent, a level-k virtual node stores the value in its ancestor MIB and, if k ≥ i − down, forwards this aggregate to its children.
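A sketch of this update path at a level-i virtual node follows. It is our simplification: the collaborator interface is hypothetical, and the exact condition under which an update continues upward (here, up > i) is our assumption about how the up parameter gates propagation.

```java
// Sketch of update handling at a level-i virtual node: recompute the local
// aggregate, then propagate up and/or down per the function's parameters.
class UpdateHandler {
    // Hypothetical collaborators; not the prototype's actual classes.
    interface Node {
        Object recomputeAggregate(String attrKey, int level);
        void sendToParent(String attrKey, Object aggregate);
        void sendToChildren(String attrKey, Object aggregate, int level);
    }

    static void onChildUpdate(Node node, String attrKey, int i,
                              int up, int down) {
        Object aggregate = node.recomputeAggregate(attrKey, i); // reduction MIB
        if (up > i) {
            node.sendToParent(attrKey, aggregate); // continue up the tree
        }
        if (i >= 1 && down > 0) {
            node.sendToChildren(attrKey, aggregate, i); // scatter down
        }
    }
}
```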
Probe A Probe collects and returns the aggregate value for a specified attribute key at a specified level of the tree. As Figure 1 illustrates, the system satisfies a probe for a level-i aggregate value using a four-phase protocol that may be short-circuited when updates have previously propagated either results or partial results up or down the tree. In phase 1, the route probe phase, the system routes the probe up the attribute key's tree to either the root of the level-i subtree or to a node that stores the requested value in its ancestor MIB. In the former case, the system proceeds to phase 2, and in the latter it skips to phase 4. In phase 2, the probe scatter phase, each node that receives a probe request sends it to all of its children unless the node's reduction MIB already has a value that matches the probe's attribute key, in which case the node initiates phase 3 on behalf of its subtree. In phase 3, the probe aggregation phase, when a node receives values for the specified key from each of its children, it executes the aggregate function on these values and either (a) forwards the result to its parent (if its level is less than i) or (b) initiates phase 4 (if it is at level i). Finally, in phase 4, the aggregate routing phase, the aggregate value is routed down to the node that requested it. Note that in the extreme case of a function installed with up = down = 0, a level-i probe can touch all nodes in a level-i subtree, while in the opposite extreme case of a function installed with up = down = ALL, a probe is a completely local operation at a leaf.
For probes that include phases 2 (probe scatter) and 3 (probe aggregation), an issue is how to decide when a node should stop waiting for its children to respond and send up its current aggregate value. A node stops waiting for its children when one of three conditions occurs: (1) all children have responded, (2) the ADHT layer signals one or more reconfiguration events that mark all children that have not yet responded as unreachable, or (3) a watchdog timer for the request fires. The last case accounts for nodes that participate in the ADHT protocol but that fail at the AML level.
At a virtual node, continuous probes are handled similarly to one-shot probes, except that such probes are stored in the outstanding probes table for the time period expTime specified in the probe. Thus each update for an attribute triggers re-evaluation of continuous probes for that attribute.
We implement a lease-based mechanism for dynamic adaptation. A level-l virtual node for an attribute can issue the lease for the level-l aggregate to a parent or a child only if up is greater than l or it has leases from all its children. A virtual node at level l can issue the lease for a level-k aggregate, for k > l, to a child only if down ≥ k − l or if it has the lease for that aggregate from its parent. Now a probe for a level-k aggregate can be answered by a level-l virtual node if it has a valid lease, irrespective of the up and down values. We are currently designing different policies to decide when to issue a lease and when to revoke a lease and are also evaluating them with the above mechanism.
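The lease conditions just stated can be written down directly; the following is our own rendering, with the lease bookkeeping itself elided.

```java
// Sketch of the lease rules: a node may answer a level-k probe from its
// cached aggregate whenever it holds a valid lease for that aggregate.
class LeaseRules {
    // May a level-l node issue a lease for the level-l aggregate?
    static boolean canIssueOwnLevelLease(int l, int up,
                                         boolean leasesFromAllChildren) {
        return up > l || leasesFromAllChildren;
    }

    // May a level-l node issue a lease for a level-k aggregate (k > l)?
    static boolean canIssueHigherLevelLease(int l, int k, int down,
                                            boolean leaseFromParent) {
        return down >= k - l || leaseFromParent;
    }

    // May a level-l node answer a level-k probe locally?
    static boolean canAnswerProbe(boolean hasValidLease) {
        return hasValidLease; // independent of the up and down values
    }
}
```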
Our current prototype does not implement access control on install, update, and probe operations, but we plan to implement Astrolabe's [38] certificate-based restrictions. Also, our current prototype does not restrict the resource consumption in executing the aggregation functions, but techniques from research on resource management in server systems and operating systems [2, 3] can be applied here.
6. ROBUSTNESS
In large scale systems, reconfigurations are common. Our two main principles for robustness are to guarantee (i) read availability - probes complete in finite time, and (ii) eventual consistency - updates by a live node will be visible to probes by connected nodes in finite time. During reconfigurations, a probe might return a stale value for two reasons. First, reconfigurations lead to incorrectness in the previous aggregate values. Second, the nodes needed for aggregation to answer the probe become unreachable. Our system also provides two hooks that applications can use for improved end-to-end robustness in the presence of reconfigurations: (1) on-demand re-aggregation and (2) application-controlled replication.
Our system handles reconfigurations at two levels: adaptation at the ADHT layer to ensure connectivity, and adaptation at the AML layer to ensure access to the data in SDIMS.
6.1 ADHT Adaptation
Our ADHT layer adaptation algorithm is the same as Pastry's adaptation algorithm [32]: the leaf sets are repaired as soon as a reconfiguration is detected, and the routing table is repaired lazily. Note that maintaining extra leaf sets does not degrade the fault-tolerance property of the original Pastry; indeed, it enhances the resilience of ADHTs to failures by providing additional routing links. Due to redundancy in the leaf sets and the routing table, updates can be routed towards their root nodes successfully even during failures. Also note that the administrative isolation property satisfied by our ADHT algorithm ensures that reconfigurations in a level-i domain do not affect the probes for level i in a sibling domain.

Figure 7: Default lazy data re-aggregation time line.

6.2 AML Adaptation
Broadly, we use two types of strategies for AML adaptation in the face of reconfigurations: (1) replication in time as a fundamental baseline strategy, and (2) replication in space as an additional performance optimization that falls back on replication in time when the system runs out of replicas. We provide two mechanisms for replication in time. First, lazy re-aggregation propagates already received updates to new children or new parents in a lazy fashion over time. Second, applications can reduce the probability of probe response staleness during such repairs through our flexible API with an appropriate setting of the down parameter.
Lazy Re-aggregation: The DHT layer informs the AML layer about reconfigurations in the network using three function calls: newParent, failedChild, and newChild. On newParent(parent, prefix), all probes in the outstanding-probes table corresponding to prefix are re-evaluated. If parent is not null, then aggregation functions and already existing data are lazily transferred in the background. Any new updates, installs, and probes for this prefix are sent to the parent immediately. On failedChild(child, prefix), the AML layer marks the child as inactive, and any outstanding probes that are waiting for data from this child are re-evaluated. On newChild(child, prefix), the AML layer creates space in its data structures for this child.
Figure 7 shows the time line for the default lazy re-aggregation upon reconfiguration. Probes initiated between points 1 and 2 that are affected by reconfigurations are re-evaluated by the AML upon detecting the reconfiguration. Probes that complete or start between points 2 and 8 may return stale answers.
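The three DHT-to-AML callbacks described above could be skeletonized as follows; the collaborator interface is our own hypothetical stand-in for the AML's internal operations.

```java
// Sketch of the DHT -> AML reconfiguration callbacks.
class AmlAdaptation {
    interface Aml {
        void reevaluateOutstandingProbes(String prefix);
        void lazyTransferStateTo(String parent, String prefix); // background
        void markChildInactive(String child);
        void allocateChildState(String child, String prefix);
    }

    static void newParent(Aml aml, String parent, String prefix) {
        aml.reevaluateOutstandingProbes(prefix);
        if (parent != null) {
            aml.lazyTransferStateTo(parent, prefix); // functions + existing data
        }
    }

    static void failedChild(Aml aml, String child, String prefix) {
        aml.markChildInactive(child);
        aml.reevaluateOutstandingProbes(prefix); // probes waiting on this child
    }

    static void newChild(Aml aml, String child, String prefix) {
        aml.allocateChildState(child, prefix);
    }
}
```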
On-demand Re-aggregation: The default lazy re-aggregation scheme lazily propagates the old updates in the system. Additionally, using the up and down knobs in the Probe API, applications can force on-demand fast re-aggregation of updates to avoid staleness in the face of reconfigurations. In particular, if an application detects or suspects that an answer is stale, it can re-issue the probe with increased up and down parameters to force the refreshing of the cached data. Note that this strategy will be useful only after the DHT adaptation is completed (point 6 on the time line in Figure 7).
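The retry idiom just described is sketched below, reusing the hypothetical Sdims interface from the sketch in Section 3.1; the staleness check is application-specific and purely illustrative here.

```java
// Sketch: re-issue a suspected-stale probe with larger up/down values to
// force on-demand re-aggregation along the corresponding levels.
import java.util.List;

class StaleRetry {
    static List<Object> probeFresh(Sdims sdims, String type, String name,
                                   int level, int up, int down, long expTime) {
        List<Object> answer =
            sdims.probe(type, name, false, level, up, down, expTime);
        if (looksStale(answer)) {
            // Larger knobs force a fresh re-aggregation rather than cached data.
            answer = sdims.probe(type, name, false, level, up + 1, down + 1, expTime);
        }
        return answer;
    }

    static boolean looksStale(List<Object> answer) {
        return answer == null || answer.isEmpty(); // application-specific check
    }
}
```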
Replication in Space: Replication in space is more challenging in our system than in a DHT file location application, because in the latter, replication in space can be achieved easily by just replicating the root node's contents. In our system, however, all internal nodes have to be replicated along with the root.
In our system, applications control replication in space using the up and down knobs in the Install API; with large up and down values, aggregates at the intermediate virtual nodes are propagated to more nodes in the system. By reducing the number of nodes that have to be accessed to answer a probe, applications can reduce the probability of incorrect results occurring due to the failure of nodes that do not contribute to the aggregate. For example, in a file location application, using a non-zero positive down parameter ensures that a file's global aggregate is replicated on nodes other than the root.

Figure 8: Flexibility of our approach, with different UP and DOWN values in a network of 4096 nodes for different read-write ratios.

Probes for the file location can then be answered without accessing the root; hence they are not affected by the failure of the root. However, note that this technique is not appropriate in some cases. An aggregated value in a file location system is valid as long as the node hosting the file is active, irrespective of the status of other nodes in the system; whereas an application that counts the number of machines in a system may receive incorrect results irrespective of the replication. If reconfigurations are only transient (like a node temporarily not responding due to a burst of load), the replicated aggregate closely or correctly resembles the current state.
7. EVALUATION
We have implemented a prototype of SDIMS in Java using the FreePastry framework [32] and performed large-scale simulation experiments and micro-benchmark experiments on two real networks: 187 machines in the department and 69 machines on the PlanetLab [27] testbed. In all experiments, we use static up and down values and turn off dynamic adaptation. Our evaluation supports four main conclusions. First, the flexible API provides different propagation strategies that minimize communication resources at different read-to-write ratios. For example, in our simulations we observe Update-Local to be efficient for read-to-write ratios below 0.0001, Update-Up around 1, and Update-All above 50000. Second, our system is scalable with respect to both nodes and attributes. In particular, we find that the maximum node stress in our system is an order of magnitude lower than observed with an Update-All, gossiping approach. Third, in contrast to unmodified Pastry, which violates the path convergence property in up to 14% of cases, our system conforms to the property. Fourth, the system is robust to reconfigurations and adapts to failures within a few seconds.
7.1 Simulation Experiments
Flexibility and Scalability: A major innovation of our system is its ability to provide flexible computation and propagation of aggregates. In Figure 8, we demonstrate the flexibility exposed by the aggregation API explained in Section 3. We simulate a system with 4096 nodes arranged in a domain hierarchy with a branching factor (bf) of 16 and install several attributes with different up and down parameters. We plot the average number of messages per operation incurred for a wide range of read-to-write ratios of the operations for different attributes. Simulations with other network sizes and branching factors reveal similar results. This graph clearly demonstrates the benefit of supporting a wide range of computation and propagation strategies. Although having a small UP value is efficient for attributes with low read-to-write ratios (write-dominated applications), the probe latency, when reads do occur, may be high, since the probe needs to aggregate the data from all the nodes that did not send their aggregate up. Conversely, applications that wish to improve probe overheads or latencies can increase their UP and DOWN propagation at a potential cost of increased write overheads.
Compared to an existing Update-All single aggregation tree approach [38], scalability in SDIMS comes from (1) leveraging DHTs to form multiple aggregation trees that split the load across nodes and (2) flexible propagation that avoids propagation of all updates to all nodes.

Figure 9: Max node stress for a gossiping approach vs. the ADHT-based approach for different numbers of nodes with an increasing number of sparse attributes.

Figure 9 demonstrates SDIMS's scalability with nodes and attributes. For this experiment, we build a simulator to simulate both Astrolabe [38] (a gossiping, Update-All approach) and our system for an increasing number of sparse attributes. Each attribute corresponds to the membership in a multicast session with a small number of participants. For this experiment, the session size is set to 8, the branching factor is set to 16, the propagation mode for SDIMS is Update-Up, and the participant nodes perform continuous probes for the global aggregate value. We plot the maximum node stress (in terms of messages) observed in both schemes for different-sized networks with an increasing number of sessions when the participant of each session performs an update operation. Clearly, the DHT-based scheme is more scalable with respect to attributes than an Update-All gossiping scheme. Observe that at a constant number of attributes, as the number of nodes increases in the system, the maximum node stress increases in the gossiping approach, while it decreases in our approach as the load of aggregation is spread across more nodes. Simulations with other session sizes (4 and 16) yield similar results.
Administrative Hierarchy and Robustness: Although the routing protocol of ADHT might lead to an increased number of hops to reach the root for a key as compared to original Pastry, the algorithm conforms to the path convergence and locality properties and thus provides the administrative isolation property.
In Figure 10, we quantify the increased path length through comparisons with unmodified Pastry for different-sized networks with different branching factors of the domain hierarchy tree. To quantify the path convergence property, we perform simulations with a large number of probe pairs, each pair probing for a random key starting from two randomly chosen nodes. In Figure 11, we plot the percentage of probe pairs for unmodified Pastry that do not conform to the path convergence property. When the branching factor is low, the domain hierarchy tree is deeper, resulting in a large difference between Pastry and ADHT in the average path length; but it is at these small domain sizes that path convergence fails more often with the original Pastry.

Figure 10: Average path length to root in Pastry versus ADHT for different branching factors. Note that all lines corresponding to Pastry overlap.

Figure 11: Percentage of probe pairs whose paths to the root did not conform to the path convergence property with Pastry.

Figure 12: Latency of probes for the aggregate at the global root level with three different modes of aggregate propagation on (a) department machines, and (b) PlanetLab machines.

7.2 Testbed experiments
We run our prototype on 180 department machines (some machines ran multiple node instances, so this configuration has a total of 283 SDIMS nodes) and also on 69 machines of the PlanetLab [27] testbed. We measure the performance of our system with two micro-benchmarks. In the first micro-benchmark, we install three aggregation functions of types Update-Local, Update-Up, and Update-All, perform an update operation on all nodes for all three aggregation functions, and measure the latencies incurred by probes for the global aggregate from all nodes in the system. Figure 12 shows the observed latencies for both testbeds. Notice that the latency in Update-Local is high compared to the Update-Up policy. This is because latency in Update-Local is affected by the presence of even a single slow machine or a single machine with a high-latency network connection.
In the second benchmark, we examine robustness. We install one aggregation function of type Update-Up that performs a sum operation on an integer-valued attribute. Each node updates the attribute with the value 10.
Then we monitor the latencies and results returned by the probe operation for the global aggregate on one chosen node, while we kill some nodes after every few probes.

Figure 13: Micro-benchmark on the department network showing the behavior of probes from a single node while failures are happening at some other nodes. All 283 nodes assign a value of 10 to the attribute.

Figure 14: Probe performance during failures on 69 machines of the PlanetLab testbed.

Figure 13 shows the results on the departmental testbed. Due to the nature of the testbed (machines in a department), there is little change in the latencies even in the face of reconfigurations. In Figure 14, we present the results of the experiment on the PlanetLab testbed. The root node of the aggregation tree is terminated after about 275 seconds. There is a 5X increase in the latencies after the death of the initial root node, as a more distant node becomes the root node after repairs. In both experiments, the values returned on probes start reflecting the correct situation within a short time after the failures.
From both the testbed benchmark experiments and the simulation experiments on flexibility and scalability, we conclude that (1) the flexibility provided by SDIMS allows applications to trade off read-write overheads (Figure 8), read latency, and sensitivity to slow machines (Figure 12), (2) a good default aggregation strategy is Update-Up, which has moderate overheads on both reads and writes (Figure 8), has moderate read latencies (Figure 12), and is scalable with respect to both nodes and attributes (Figure 9), and (3) small domain sizes are the cases where DHT algorithms fail to provide path convergence more often, and SDIMS ensures path convergence with only a moderate increase in path lengths (Figure 11).
7.3 Applications
SDIMS is designed as a general distributed monitoring and control infrastructure for a broad range of applications. Above, we discuss some simple microbenchmarks, including a multicast membership service and a calculate-sum function. Van Renesse et al. [38] provide detailed examples of how such a service can be used for a peer-to-peer caching directory, a data-diffusion service, a publish-subscribe system, barrier synchronization, and voting. Additionally, we have initial experience using SDIMS to construct two significant applications: the control plane for a large-scale distributed file system [12] and a network monitor for identifying heavy hitters that consume excess resources.
Distributed file system control: The PRACTI (Partial Replication, Arbitrary Consistency, Topology Independence) replication system provides a set of mechanisms for data replication over which arbitrary control policies can be layered. We use SDIMS to provide several key functions in order to create a file system over the low-level PRACTI mechanisms.
First, nodes use SDIMS as a directory to handle read misses. When a node n receives an object o, it updates the (ReadDir, o) attribute with the value n; when n discards o from its local store, it resets (ReadDir, o) to NULL. At each virtual node, the ReadDir aggregation function simply selects a random non-null child value (if any), and we use the Update-Up policy for propagating updates. Finally, to locate a nearby copy of an object o, a node n1 issues a series of probe requests for the (ReadDir, o) attribute, starting with level = 1 and increasing the level value with each repeated probe request until a non-null node ID n2 is returned. n1 then sends a demand read request to n2, and n2 sends the data if it has it. Conversely, if n2 does not have a copy of o, it sends a nack to n1, and n1 issues a retry probe with the down parameter set to a value larger than that used in the previous probe in order to force on-demand re-aggregation, which will yield a fresher value for the retry.
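The read-miss location loop described above might look as follows, again reusing the hypothetical Sdims interface sketched earlier; the MAX_LEVEL bound is our own illustrative cap on tree height.

```java
// Sketch of the read-miss directory lookup: probe increasing levels of
// (ReadDir, o) until some node holding the object is found.
import java.util.List;

class ReadMissDirectory {
    static final int MAX_LEVEL = 40; // illustrative upper bound on tree height

    static String locateCopy(Sdims sdims, String objectId) {
        for (int level = 1; level <= MAX_LEVEL; level++) {
            List<Object> answers =
                sdims.probe("ReadDir", objectId, false, level, 0, 0, 0);
            for (Object nodeId : answers) {
                if (nodeId != null) {
                    return (String) nodeId; // a node in scope storing the object
                }
            }
        }
        return null; // no copy found anywhere in the tree
    }
}
```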
Second, nodes subscribe to invalidations and updates to interest sets of files, and nodes use SDIMS to set up and maintain per-interest-set, network-topology-sensitive spanning trees for propagating this information. To subscribe to invalidations for interest set i, a node n1 first updates the (Inval, i) attribute with its identity n1, and the aggregation function at each virtual node selects one non-null child value. Finally, n1 probes increasing levels of the (Inval, i) attribute until it finds the first node n2 ≠ n1; n1 then uses n2 as its parent in the spanning tree. n1 also issues a continuous probe for this attribute at this level so that it is notified of any change to its spanning tree parent. Spanning trees for streams of pushed updates are maintained in a similar manner.
In the future, we plan to use SDIMS for at least two additional services within this replication system. First, we plan to use SDIMS to track the read and write rates to different objects; prefetch algorithms will use this information to prioritize replication [40, 41]. Second, we plan to track the ranges of invalidation sequence numbers seen by each node for each interest set in order to augment the spanning trees described above with additional hole filling to allow nodes to locate specific invalidations they have missed.
Overall, our initial experience with using SDIMS for the PRACTI replication system suggests that (1) the general aggregation interface provided by SDIMS simplifies the construction of distributed applications: given the low-level PRACTI mechanisms, we were able to construct a basic file system that uses SDIMS for several distinct control tasks in under two weeks, and (2) the weak consistency guarantees provided by SDIMS meet the requirements of this application: each node's controller effectively treats information from SDIMS as hints, and if a contacted node does not have the needed data, the controller retries, using SDIMS on-demand re-aggregation to obtain a fresher hint.
Distributed heavy hitter problem: The goal of the heavy hitter problem is to identify network sources, destinations, or protocols that account for significant or unusual amounts of traffic. As noted by Estan et al. [13], this information is useful for a variety of applications such as intrusion detection (e.g., port scanning), denial of service detection, worm detection and tracking, fair network allocation, and network maintenance. Significant work has been done on developing high-performance stream-processing algorithms for identifying heavy hitters at one router, but this is just a first step; ideally these applications would like not just one router's view of the heavy hitters but an aggregate view.
We use SDIMS to allow local information about heavy hitters to be pooled into a view of global heavy hitters. For each destination IP address IPx, a node updates the attribute (DestBW, IPx) with the number of bytes sent to IPx in the last time window.
The aggregation function for attribute type DestBW is installed with the Update-Up strategy and simply adds the values from child nodes. Nodes perform a continuous probe for the global aggregate of the attribute and raise an alarm when the global aggregate value goes above a specified limit. Note that only nodes sending data to a particular IP address perform probes for the corresponding attribute. Also note that techniques from [25] can be extended to the hierarchical case to trade off precision for communication bandwidth.
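A minimal sketch of this heavy-hitter usage follows; the byte threshold and the callback shape are our own illustration, not values from the paper.

```java
// Sketch: sum child values for (DestBW, IPx) under Update-Up, and alarm
// when the global aggregate exceeds a threshold.
import java.util.List;

class HeavyHitterMonitor {
    static final long LIMIT_BYTES = 100_000_000L; // illustrative threshold

    // Update-Up aggregation function for attribute type DestBW.
    static long aggregate(List<Long> childBytes) {
        long sum = 0;
        for (Long bytes : childBytes) {
            if (bytes != null) sum += bytes;
        }
        return sum;
    }

    // Invoked when a continuous probe delivers a new global aggregate value.
    static void onGlobalAggregate(String destIp, long totalBytes) {
        if (totalBytes > LIMIT_BYTES) {
            System.out.println("ALARM: heavy hitter " + destIp +
                               " received " + totalBytes + " bytes");
        }
    }
}
```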
8. RELATED WORK
The aggregation abstraction we use in our work is heavily influenced by the Astrolabe [38] project. Astrolabe adopts a Propagate-All approach and unstructured gossiping techniques to attain robustness [5]. However, any gossiping scheme requires aggressive replication of the aggregates. While such aggressive replication is efficient for read-dominated attributes, it incurs high message cost for attributes with a small read-to-write ratio. Our approach provides a flexible API for applications to set propagation rules according to their read-to-write ratios. Other closely related projects include Willow [39], Cone [4], DASIS [1], and SOMO [45]. Willow, DASIS, and SOMO build a single tree for aggregation. Cone builds a tree per attribute and requires a total order on the attribute values.
Several academic [15, 21, 42] and commercial [37] distributed monitoring systems have been designed to monitor the status of large networked systems. Some of them are centralized, where all the monitoring data is collected and analyzed at a central host. Ganglia [15, 23] uses a hierarchical system where the attributes are replicated within clusters using multicast and then cluster aggregates are further aggregated along a single tree. Sophia [42] is a distributed monitoring system designed with a declarative logic programming model where the location of query execution is both explicit in the language and can be calculated during evaluation. This research is complementary to our work. TAG [21] collects information from a large number of sensors along a single tree.
The observation that DHTs internally provide a scalable forest of reduction trees is not new. Plaxton et al.'s [28] original paper describes not a DHT, but a system for hierarchically aggregating and querying object location data in order to route requests to nearby copies of objects. Many systems, building upon both Plaxton's bit-correcting strategy [32, 46] and upon other strategies [24, 29, 35], have chosen to hide this power and export a simple and general distributed hash table abstraction as a useful building block for a broad range of distributed applications. Some of these systems internally make use of the reduction forest not only for routing but also for caching [32], but for simplicity, these systems do not generally export this powerful functionality in their external interface. Our goal is to develop and expose the internal reduction forest of DHTs as a similarly general and useful abstraction.
Although object location is a predominant target application for DHTs, several other applications like multicast [8, 9, 33, 36] and DNS [11] are also built using DHTs. All these systems implicitly perform aggregation on some attribute, and each one of them must be designed to handle any reconfigurations in the underlying DHT. With the aggregation abstraction provided by our system, designing and building such applications becomes easier.
Internal DHT trees typically do not satisfy the domain locality properties required in our system. Castro et al. [7] and Gummadi et al. [17] point out the importance of path convergence from the perspective of achieving efficiency and investigate the performance of Pastry and other DHT algorithms, respectively. SkipNet [18] provides domain-restricted routing, where a key search is limited to the specified domain. This interface can be used to ensure path convergence by searching in the lowest domain and moving up to the next domain when the search reaches the root in the current domain. Although this strategy guarantees path convergence, it loses the aggregation tree abstraction property of DHTs, as the domain-constrained routing might touch a node more than once (as it searches forward and then backward to stay within a domain).
9. CONCLUSIONS
This paper presents a Scalable Distributed Information Management System (SDIMS) that aggregates information in large-scale networked systems and that can serve as a basic building block for a broad range of applications. For large-scale systems, hierarchical aggregation is a fundamental abstraction for scalability. We build our system by extending ideas from Astrolabe and DHTs to achieve (i) scalability with respect to both nodes and attributes through a new aggregation abstraction that helps leverage DHTs' internal trees for aggregation, (ii) flexibility through a simple API that lets applications control propagation of reads and writes, (iii) administrative isolation through simple augmentations of current DHT algorithms, and (iv) robustness to node and network reconfigurations through lazy re-aggregation, on-demand re-aggregation, and tunable spatial replication.
Acknowledgements
We are grateful to J.C. Browne, Robert van Renesse, Amin Vahdat, Jay Lepreau, and the anonymous reviewers for their helpful comments on this work.
10. REFERENCES
[1] K. Albrecht, R. Arnold, M. Gahwiler, and R. Wattenhofer. Join and Leave in Peer-to-Peer Systems: The DASIS Approach. Technical report, CS, ETH Zurich, 2003.
[2] G. Back, W. H. Hsieh, and J. Lepreau. Processes in KaffeOS: Isolation, Resource Management, and Sharing in Java. In Proc. OSDI, Oct. 2000.
[3] G. Banga, P. Druschel, and J. Mogul. Resource Containers: A New Facility for Resource Management in Server Systems. In Proc. OSDI, Feb. 1999.
[4] R. Bhagwan, P. Mahadevan, G. Varghese, and G. M. Voelker. Cone: A Distributed Heap-Based Approach to Resource Selection. Technical Report CS2004-0784, UCSD, 2004.
[5] K. P. Birman. The Surprising Power of Epidemic Communication. In Proceedings of FuDiCo, 2003.
[6] B. Bloom. Space/time tradeoffs in hash coding with allowable errors. Comm. of the ACM, 13(7):422-425, 1970.
[7] M. Castro, P. Druschel, Y. C. Hu, and A. Rowstron. Exploiting Network Proximity in Peer-to-Peer Overlay Networks. Technical Report MSR-TR-2002-82, MSR.
[8] M. Castro, P. Druschel, A.-M. Kermarrec, A. Nandi, A. Rowstron, and A. Singh. SplitStream: High-bandwidth Multicast in a Cooperative Environment. In SOSP, 2003.
[9] M. Castro, P. Druschel, A.-M. Kermarrec, and A. Rowstron. SCRIBE: A Large-scale and Decentralised Application-level Multicast Infrastructure.
IEEE JSAC (Special Issue on Network Support for Multicast Communications), 2002.
[10] J. Challenger, P. Dantzig, and A. Iyengar. A scalable and highly available system for serving dynamic data at frequently accessed web sites. In Proceedings of ACM/IEEE Supercomputing '98 (SC98), Nov. 1998.
[11] R. Cox, A. Muthitacharoen, and R. T. Morris. Serving DNS using a Peer-to-Peer Lookup Service. In IPTPS, 2002.
[12] M. Dahlin, L. Gao, A. Nayate, A. Venkataramani, P. Yalagandula, and J. Zheng. PRACTI replication for large-scale systems. Technical Report TR-04-28, The University of Texas at Austin, 2004.
[13] C. Estan, G. Varghese, and M. Fisk. Bitmap algorithms for counting active flows on high speed links. In Internet Measurement Conference, 2003.
[14] Y. Fu, J. Chase, B. Chun, S. Schwab, and A. Vahdat. SHARP: An architecture for secure resource peering. In Proc. SOSP, Oct. 2003.
[15] Ganglia: Distributed Monitoring and Execution System. http://ganglia.sourceforge.net.
[16] S. Gribble, A. Halevy, Z. Ives, M. Rodrig, and D. Suciu. What Can Peer-to-Peer Do for Databases, and Vice Versa? In Proceedings of WebDB, 2001.
[17] K. Gummadi, R. Gummadi, S. D. Gribble, S. Ratnasamy, S. Shenker, and I. Stoica. The Impact of DHT Routing Geometry on Resilience and Proximity. In SIGCOMM, 2003.
[18] N. J. A. Harvey, M. B. Jones, S. Saroiu, M. Theimer, and A. Wolman. SkipNet: A Scalable Overlay Network with Practical Locality Properties. In USITS, Mar. 2003.
[19] R. Huebsch, J. M. Hellerstein, N. Lanham, B. T. Loo, S. Shenker, and I. Stoica. Querying the Internet with PIER. In Proceedings of the VLDB Conference, May 2003.
[20] C. Intanagonwiwat, R. Govindan, and D. Estrin. Directed diffusion: a scalable and robust communication paradigm for sensor networks. In MobiCom, 2000.
[21] S. R. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong. TAG: a Tiny AGgregation Service for ad-hoc Sensor Networks. In OSDI, 2002.
[22] D. Malkhi. Dynamic Lookup Networks. In FuDiCo, 2002.
[23] M. L. Massie, B. N. Chun, and D. E. Culler. The ganglia distributed monitoring system: Design, implementation, and experience. In submission.
[24] P. Maymounkov and D. Mazieres. Kademlia: A Peer-to-peer Information System Based on the XOR Metric. In Proceedings of IPTPS, Mar. 2002.
[25] C. Olston and J. Widom. Offering a precision-performance tradeoff for aggregation queries over replicated data. In VLDB, pages 144-155, Sept. 2000.
[26] K. Petersen, M. Spreitzer, D. Terry, M. Theimer, and A. Demers. Flexible Update Propagation for Weakly Consistent Replication. In Proc. SOSP, Oct. 1997.
[27] PlanetLab. http://www.planet-lab.org.
[28] C. G. Plaxton, R. Rajaraman, and A. W. Richa. Accessing Nearby Copies of Replicated Objects in a Distributed Environment. In ACM SPAA, 1997.
[29] S. Ratnasamy, P. Francis, M. Handley, R. Karp, and S. Shenker. A Scalable Content Addressable Network. In Proceedings of ACM SIGCOMM, 2001.
[30] S. Ratnasamy, S. Shenker, and I. Stoica. Routing Algorithms for DHTs: Some Open Questions. In IPTPS, Mar. 2002.
[31] T. Roscoe, R. Mortier, P. Jardetzky, and S. Hand. InfoSpect: Using a Logic Language for System Health Monitoring in Distributed Systems. In Proceedings of the SIGOPS European Workshop, 2002.
[32] A. Rowstron and P. Druschel. Pastry: Scalable, Distributed Object Location and Routing for Large-scale Peer-to-peer Systems.
In Middleware, 2001.
[33] S. Ratnasamy, M. Handley, R. Karp, and S. Shenker. Application-level Multicast using Content-addressable Networks. In Proceedings of NGC, Nov. 2001.
[34] W. Stallings. SNMP, SNMPv2, and CMIP. Addison-Wesley, 1993.
[35] I. Stoica, R. Morris, D. Karger, F. Kaashoek, and H. Balakrishnan. Chord: A scalable Peer-To-Peer lookup service for internet applications. In ACM SIGCOMM, 2001.
[36] S. Zhuang, B. Zhao, A. Joseph, R. Katz, and J. Kubiatowicz. Bayeux: An Architecture for Scalable and Fault-tolerant Wide-Area Data Dissemination. In NOSSDAV, 2001.
[37] IBM Tivoli Monitoring. www.ibm.com/software/tivoli/products/monitor.
[38] R. Van Renesse, K. P. Birman, and W. Vogels. Astrolabe: A Robust and Scalable Technology for Distributed System Monitoring, Management, and Data Mining. TOCS, 2003.
[39] R. Van Renesse and A. Bozdog. Willow: DHT, Aggregation, and Publish/Subscribe in One Protocol. In IPTPS, 2004.
[40] A. Venkataramani, P. Weidmann, and M. Dahlin. Bandwidth constrained placement in a WAN. In PODC, Aug. 2001.
[41] A. Venkataramani, P. Yalagandula, R. Kokku, S. Sharif, and M. Dahlin. Potential costs and benefits of long-term prefetching for content-distribution. Elsevier Computer Communications, 25(4):367-375, Mar. 2002.
[42] M. Wawrzoniak, L. Peterson, and T. Roscoe. Sophia: An Information Plane for Networked Systems. In HotNets-II, 2003.
[43] R. Wolski, N. Spring, and J. Hayes. The network weather service: A distributed resource performance forecasting service for metacomputing. Journal of Future Generation Computing Systems, 15(5-6):757-768, Oct. 1999.
[44] P. Yalagandula and M. Dahlin. SDIMS: A scalable distributed information management system. Technical Report TR-03-47, Dept. of Computer Sciences, UT Austin, Sep. 2003.
[45] Z. Zhang, S.-M. Shi, and J. Zhu. SOMO: Self-Organized Metadata Overlay for Resource Management in P2P DHT. In IPTPS, 2003.
[46] B. Y. Zhao, J. D. Kubiatowicz, and A. D. Joseph. Tapestry: An Infrastructure for Fault-tolerant Wide-area Location and Routing. Technical Report UCB/CSD-01-1141, UC Berkeley, Apr. 2001.", "keywords": "autonomous dht;temporal heterogeneity;administrative isolation;distribute hash table;lazy re-aggregation;write-dominated attribute;large-scale networked system;update-upk-downj strategy;information management system;freepastry framework;tunable spatial replication;distributed hash table;aggregation management layer;distributed operating system backbone;availability;read-dominated attribute;network system monitor;virtual node;networked system monitoring;eventual consistency"} {"name": "train_C-61", "title": "Authority Assignment in Distributed Multi-Player Proxy-based Games", "abstract": "We present a proxy-based gaming architecture and authority assignment within this architecture that can lead to a better game playing experience in Massively Multi-player Online games. The proposed game architecture consists of distributed game clients that connect to game proxies (referred to as communication proxies) which forward game-related messages from the clients to one or more game servers. Unlike proxy-based architectures that have been proposed in the literature, where the proxies replicate all of the game state, the communication proxies in the proposed architecture support clients that are in proximity to them in the physical network and maintain information about selected portions of the game space that are relevant only to the clients that they support.
Using this architecture, we propose an authority assignment mechanism that divides the authority for deciding the outcome of different actions/events that occur within the game between clients and servers on a per-action/event basis. We show that such division of authority leads to a smoother game playing experience by implementing this mechanism in a massively multi-player online game called RPGQuest. In addition, we argue that cheat detection techniques can be easily implemented at the communication proxies if they are made aware of the game-play mechanics.", "fulltext": "1. INTRODUCTION
In Massively Multi-player On-line Games (MMOG), game clients positioned across the Internet connect to a game server to interact with other clients in order to be part of the game. In current architectures, these interactions are direct, in that the game clients and the servers exchange game messages with each other. In addition, current MMOGs delegate all authority to the game server to make decisions about the results pertaining to the actions that game clients take and also to decide upon the result of other game-related events. Such centralized authority has been implemented with the claim that this improves the security and consistency required in a gaming environment.
A number of works have shown the effect of network latency on distributed multi-player games [1, 2, 3, 4]. It has been shown that network latency has a real impact on practical game playing experience [3, 5]. Some types of games can function quite well even in the presence of large delays. For example, [4] shows that in a modern RPG called Everquest 2, the breakpoint of the game when adding artificial latency was 1250 ms. This is attributed to the fact that the combat system used in Everquest 2 is queueing-based and has very low interaction. For example, a player queues up 4 or 5 spells they wish to cast; each of these spells takes 1-2 seconds to actually perform, giving the server plenty of time to validate these actions. But there are other games, such as FPS games, that break even in the presence of moderate network latencies [3, 5]. Latency compensation techniques have been proposed to alleviate the effect of latency [1, 6, 7], but it is obvious that if MMOGs are to increase in interactivity and speed, more architectures will have to be developed that address responsiveness, accuracy, and consistency of the game state.
In this paper, we propose two important features that would make game playing within MMOGs more responsive for movement and more scalable. First, we propose that centralized server-based architectures be made hierarchical through the introduction of communication proxies, so that game updates made by clients that are time sensitive, such as movement, can be more efficiently distributed to other players within their game-space. Second, we propose that assignment of authority, in terms of who makes the decision on client actions such as object pickups and hits, and collisions between players, be distributed between the clients and the servers in order to distribute the computing load away from the central server. In order to move towards more complex real-time networked games, we believe that definitions of authority must be refined.
Most currently implemented MMOGs have game servers that have almost absolute authority.
We argue that there is\nno single consistent view of the virtual game space that can\nbe maintained on any one component within a network that\nhas significant latency, such as the one that many MMOG\nplayers would experience. We believe that in most cases, the\nclient with the most accurate view of an entity is the best\nsuited to make decisions for that entity when the causality\nof that action will not immediately affect any other\nplayers. In this paper we define what it means to have authority\nwithin the context of events and objects in a virtual game\nspace. We then show the benefits of delegating authority\nfor different actions and game events between the clients\nand server.\nIn our model, the game space consists of game clients\n(representing the players) and objects that they control. We\ndivide the client actions and game events (we will\ncollectively refer to these as events) such as collisions, hits etc.\ninto three different categories, a) events for which the game\nclient has absolute authority, b) events for which the game\nserver has absolute authority, and c) events for which the\nauthority changes dynamically from client to the server and\nvice-versa. Depending on who has the authority, that\nentity will make decisions on the events that happen within a\ngame space. We propose that authority for all decisions that\npertain to a single player or object in the game that neither\naffects the other players or objects, nor are affected by the\nactions of other players be delegated to that player\"s game\nclient. These type of decisions would include collision\ndetection with static objects within the virtual game space and\nhit detection with linear path bullets (whose trajectory is\nfixed and does not change with time) fired by other players.\nAuthority for decisions that could be affected by two or more\nplayers should be delegated to the impartial central server,\nin some cases, to ensure that no conflicts occur and in other\ncases can be delegated to the clients responsible for those\nplayers. For example, collision detection of two players that\ncollide with each other and hit detection of non-linear\nbullets (that changes trajectory with time) should be delegated\nto the server. Decision on events such as item pickup (for\nexample, picking up items in a game to accumulate points)\nshould be delegated to a server if there are multiple\nplayers within close proximity of an item and any one of the\nplayers could succeed in picking the item; for item pick-up\ncontention where the client realizes that no other player,\nexcept its own player, is within a certain range of the item,\nthe client could be delegated the responsibility to claim the\nitem. The client\"s decision can always be accurately verified\nby the server.\nIn summary, we argue that while current authority models\nthat only delegate responsibility to the server to make\nauthoritative decisions on events is more secure than allowing\nthe clients to make the decisions, these types of models add\nundesirable delays to events that could very well be decided\nby the clients without any inconsistency being introduced\ninto the game. As networked games become more complex,\nour architecture will become more applicable. 
This architecture is applicable for massively multiplayer games where the speed and accuracy of game-play are a major concern while consistency between player game-states is still desired. We propose that a mixed authority assignment mechanism such as the one outlined above be implemented in high interaction MMOGs.
Our paper has the following contributions. First, we propose an architecture that uses communication proxies to enable clients to connect to the game server. A communication proxy in the proposed architecture maintains information only about portions of the game space that are relevant to clients connected to it and is able to process the movement information of objects and players within these portions. In addition, it is capable of multicasting this information only to a relevant subset of other communication proxies. These functionalities of a communication proxy lead to a decrease in latency of event update and, subsequently, better game playing experience. Second, we propose a mixed authority assignment mechanism as described above that improves game playing experience. Third, we implement the proposed mixed authority assignment mechanism within an MMOG called RPGQuest [8] to validate its viability within MMOGs.
In Section 2, we describe the proxy-based game architecture in more detail and illustrate its advantages. In Section 3, we provide a generic description of the mixed authority assignment mechanism and discuss how it improves game playing experience. In Section 4, we show the feasibility of implementing the proposed mixed authority assignment mechanism within existing MMOGs by describing a proof-of-concept implementation within an existing MMOG called RPGQuest. Section 5 discusses related work. In Section 6, we present our conclusions and discuss future work.
2. PROXY-BASED GAME ARCHITECTURE
Massively Multi-player Online Games (MMOGs) usually consist of a large game space in which the players and different game objects reside, move around, and interact with each other. State information about the whole game space could be kept in a single central server, which we would refer to as a Central-Server Architecture. But to alleviate the heavy demand on the processing for handling the large player population and the objects in the game in real-time, an MMOG is normally implemented using a distributed server architecture where the game space is further sub-divided into regions so that each region has a relatively smaller number of players and objects that can be handled by a single server. In other words, the different game regions are hosted by different servers in a distributed fashion. When a player moves out of one game region to another adjacent one, the player must communicate with a different server (than it was currently communicating with) hosting the new region. The servers communicate with one another to hand off a player or an object from one region to another. In this model, the player on the client machine has to establish multiple gaming sessions with different servers so that it can roam in the entire game space.
We propose a communication proxy based architecture where a player connects to a (geographically) nearby proxy instead of connecting to a central server in the case of a central-server architecture or to one of the servers in case of a distributed server architecture.
In the proposed architecture, players who are close by geographically join a particular proxy. The proxy then connects to one or more game servers, as needed by the set of players that connect to it, and maintains persistent transport sessions with these servers. This alleviates the problem of each player having to connect directly to multiple game servers, which can add extra connection setup delay. The introduction of communication proxies also mitigates the overhead of a large number of transport sessions that must be managed, and reduces required network bandwidth [9] and processing at the game servers, both with central server and distributed server architectures. With central server architectures, communication proxies reduce the overhead at the server by not requiring the server to terminate persistent transport sessions from every one of the clients. With distributed-server architectures, additionally, communication proxies eliminate the need for the clients to maintain persistent transport sessions to every one of the servers. Figure 1 shows the proposed architecture.
Figure 1: Architecture of the gaming environment.
Note that the communication proxies need not be cognizant of the game. They host a number of players and inform the servers which players are hosted by the proxy in question. Also note that the players hosted by a proxy may not be in the same game space. That is, a proxy hosts players that are geographically close to it, but the players themselves can reside in different parts of the game space. The proxy communicates with the servers responsible for maintaining the game spaces subscribed by the different players. The proxies communicate with one another in a peer-to-peer fashion. The responsiveness of the game can be improved for updates that do not need to wait on processing at a central authority. In this way, information about players can be disseminated even before the game server gets to know about it, which definitely improves the responsiveness of the game. However, this alone ignores consistency, which is critical in MMORPGs. The notion that an architecture such as this one can still maintain temporal consistency will be discussed in detail in Section 3.
Figure 2 shows an example of the working principle of the proposed architecture. Assume that the game space is divided into 9 regions and there are three servers responsible for managing the regions. Server S1 owns regions 1 and 2, S2 manages 4, 5, 7, and 8, and S3 is responsible for 3, 6 and 9.
Figure 2: An example.
There are four communication proxies placed in geographically distant locations. Players a, b, c join proxy P1, proxy P2 hosts players d, e, f, players g, h are with proxy P3, whereas players i, j, k, l are with proxy P4. Underneath each player, the figure shows in which game region the player is currently located. For example, players a, b, c are in regions 1, 2, 6, respectively. Therefore, proxy P1 must communicate with servers S1 and S3. The reader can verify the rest of the links between the proxies and the servers.
Players can move within a region and between regions. Player movement within a region will be tracked by the proxy hosting the player, and this movement information (for example, the player's new coordinates) will be multicast to a subset of other relevant communication proxies directly.
At the same time, this information will be sent to the server responsible for that region with the indication that this movement has already been communicated to all the other relevant communication proxies (so that the server does not have to relay this information to all the proxies). For example, if player a moves within region 1, this information will be communicated by proxy P1 to server S1 and multicast to proxies P3 and P4. Note that proxies that do not keep state information about this region at this point in time (because they do not have any clients within that region), such as P2, do not have to receive this movement information.
If a player is at the boundary of a region and moves into a new region, there are two possibilities. The first possibility is that the proxy hosting the player can identify the region into which the player is moving (based on the trajectory information) because it is also maintaining state information about the new region at that point in time. In this case, the proxy can update movement information directly at the other relevant communication proxies and also send information to the appropriate server informing it of the movement (this may require handoff between servers, as we will describe). Consider the scenario where player a is at the boundary of region 1 and proxy P1 can identify that the player is moving into region 2. Because proxy P1 is currently keeping state information about region 2, it can inform all the other relevant communication proxies (in this example, no other proxy maintains information about region 2 at this point and so no update needs to be sent to any of the other proxies) about this movement and then inform the server independently. In this particular case, server S1 is responsible for region 2 as well, and so no handoff between servers would be needed. Now consider another scenario where player j moves from region 9 to region 8 and proxy P4 is able to identify this movement. Again, because proxy P4 maintains state information about region 8, it can inform any other relevant communication proxies (again, none in this example) about this movement. But now, regions 9 and 8 are managed by different servers (servers S3 and S2 respectively) and thus a handoff between these servers is needed. We propose that in this particular scenario, the handoff be managed by the proxy P4 itself. When the proxy sends a movement update to server S3 (informing the server that the player is moving out of its region), it would also send a message to server S2 informing the server of the presence and location of the player in one of its regions.
In the intra-region and inter-region scenarios described above, the proxy is able to manage movement related information, update only the relevant communication proxies about the movement, update the servers with the movement, and enable handoff of a player between the servers if needed. In this way, the proxy performs movement updates without involving the servers in any way in this time-critical function, thereby speeding up the game and improving game playing experience for the players. We consider this the fast path for movement update.
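To make the fast-path/slow-path triage concrete, the sketch below shows how a communication proxy might handle a movement update. It is only a sketch under our own assumptions: the names (CommProxy, Endpoint, region_of), the region geometry, and the message formats are hypothetical and not taken from the RPGQuest implementation.

```python
from dataclasses import dataclass

def region_of(pos, grid=3, size=100):
    # Map a position to one of grid*grid square regions (cf. Figure 2).
    x, y = pos
    return 1 + int(x // size) + grid * int(y // size)

@dataclass(frozen=True)
class Endpoint:
    # Stand-in for a peer proxy or a region server reachable over the network.
    name: str
    def send(self, msg):
        print(f"{self.name} <- {msg}")

@dataclass
class Player:
    id: str
    pos: tuple
    region: int

class CommProxy:
    def __init__(self, tracked_regions, region_servers, peer_regions):
        self.tracked = set(tracked_regions)   # regions this proxy keeps state for
        self.servers = region_servers         # region id -> authoritative server
        self.peer_regions = peer_regions      # peer Endpoint -> regions it tracks

    def peers_tracking(self, *regions):
        return [p for p, rs in self.peer_regions.items() if rs & set(regions)]

    def on_move(self, player, new_pos):
        old, new = player.region, region_of(new_pos)
        if new in self.tracked:
            # Fast path: resolve the move locally and multicast it to the
            # relevant peer proxies before any server round trip.
            player.pos, player.region = new_pos, new
            for peer in self.peers_tracking(old, new):
                peer.send(("move", player.id, new_pos))
            # Tell the server the update was already distributed to peers.
            self.servers[old].send(("move", player.id, new_pos, "distributed"))
            if self.servers[new] is not self.servers[old]:
                # Proxy-initiated handoff between region servers.
                self.servers[new].send(("handoff", player.id, new_pos))
        else:
            # Slow path: the server decides the move and informs all proxies.
            self.servers[old].send(("move_request", player.id, new_pos))

# Example: player a moves from region 1 into region 2, both tracked by the
# proxy and both owned by S1, so the move takes the fast path with no handoff.
s1, s2, p3 = Endpoint("S1"), Endpoint("S2"), Endpoint("P3")
proxy = CommProxy({1, 2}, {1: s1, 2: s1, 4: s2}, {p3: {1}})
proxy.on_move(Player("a", (90, 10), 1), (150, 10))
```

The design point the sketch captures is that the multicast to peer proxies is off the server's critical path; the server is consulted synchronously only when the destination region is untracked.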
We envision the proxies to be just communication proxies in that they do not know about the workings of specific games. They merely process movement information of players and objects and communicate this information to the other proxies and the servers. If the proxies are made more intelligent, in that they understand more of the game logic, it is possible for them to quickly check on claims made by the clients and mitigate cheating. The servers could perform the same functionality but with more delay. Even without being aware of game logic, the proxies can provide additional functionalities such as timestamping messages to make the game playing experience more accurate [10] and fair [11].
The second possibility that should be considered is when players move between regions. It is possible that a player moves from one region to another but the proxy that is hosting the player is not able to determine the region into which the player is moving, either because a) the proxy does not maintain state information about all the regions into which the player could potentially move, or b) the proxy is not able to determine which region the player may move into (even if it maintains state information about all these regions). In this case, we propose that the proxy not be responsible for making the movement decision, but instead communicate the movement indication to the server responsible for the region within which the player is currently located. The server will then make the movement decision and then a) inform all the proxies, including the proxy hosting the player, and b) initiate handoff with another server if the player moves into a region managed by another server. We consider this the slow path for movement update, in that the servers need to be involved in determining the new position of the player. In the example, assume that player a moves from region 1 to region 4. Proxy P1 does not maintain state information about region 4 and thus would pass the movement information to server S1. The server will identify that the player has moved into region 4 and would inform proxy P1 as well as proxy P2 (which is the only other proxy that maintains information about region 4 at this point in time). Server S1 will also initiate a handoff of player a with server S2. Proxy P1 will now start maintaining state information about region 4 because one of its hosted players, player a, has moved into this region. It will do so by requesting and receiving the current state information about region 4 from server S2, which is responsible for this region.
Thus, a proxy architecture allows us to make use of faster movement updates through the fast path through a proxy if and when possible, as opposed to conventional server-based architectures that always have to use the slow path through the server for movement updates. By selectively maintaining relevant regional game state information at the proxies, we are able to achieve this capability in our architecture without the need for maintaining the complete game state at every proxy.
3. ASSIGNMENT OF AUTHORITY
As an MMOG is played, the players and the game objects that are part of the game continually change their state. For example, consider a player who owns a tank in a battlefield game.
Based on the actions of the player, the tank changes its position in the game space, the amount of ammunition the tank contains changes as it fires at other tanks, the tank collects bonus firing power based on successful hits, etc. Similarly, objects in the battlefield, such as flags and buildings, change their state when a flag is picked up by a player (i.e., a tank) or a building is destroyed by firing at it. That is, some decision has to be made on the state of each player and object as the game progresses. Note that the state of a player and/or object can contain several parameters (e.g., position, amount of ammunition, fuel storage, points collected, etc.), and if any of the parameters changes, the state of the player/object changes.
In a client-server based game, the server controls all the players and the objects. When a player at a client machine makes a move, the move is transmitted to the server over the network. The server then analyzes the move and, if the move is a valid one, changes the state of the player at the server and informs the client of the change. The client subsequently updates the state of the player and renders the player at the new location. In this case the authority to change the state of the player resides with the server entirely, and the client simply follows what the server instructs it to do.
Most of the current first person shooter (FPS) games and role playing games (RPG) fall under this category. In current FPS games, much like in RPG games, the client is not trusted. All moves and actions that it makes are validated. If a client detects that it has hit another player with a bullet, it proceeds assuming that it is a hit. Meanwhile, an update is sent to the server, and the server will send back a message either affirming or denying that the player was hit. If the remote player was not hit, then the client will know that it did not actually make the shot. If it did make the hit, an update will also be sent from the server to the other clients informing them that the other player was hit. A difference that occurs in some RPGs is that they use very dumb client programs. Some RPGs do not maintain state information at the client and therefore cannot predict anything such as hits at the client. State information is not maintained because the client is not trusted with it. In RPGs, a cheating player with a hacked game client can use state information stored at the client to gain an advantage and find things such as hidden treasure or monsters lurking around the corner. This is a reason why most MMORPGs do not send a lot of state information to the client, which causes the game to be less responsive and have lower interaction game-play than FPS games.
In a peer-to-peer game, each peer controls the player and object that it owns. When a player makes a move, the peer machine analyzes the move and, if it is a valid one, changes the state of the player and places the player in the new position. Afterwards, the owner peer informs all other peers about the new state of the player, and the rest of the peers update the state of the player. In this scenario, the authority to change the state of the player is given to the owning peer, and all other peers simply follow the owner.
For example, Battle Zone Flag (BzFlag) [12] is a multiplayer client-server game where the client has all authority for making decisions.
It was built primarily with LAN play\nin mind and cheating as an afterthought. Clients in BzFlag\nare completely authoritative and when they detect that they\nwere hit by a bullet, they send an update to the server which\nsimply forwards the message along to all other players. The\nserver does no sort of validation.\nEach of the above two traditional approaches has its own set\nof advantages and disadvantages. The first approach, which\nwe will refer to as server authoritative henceforth, uses a\ncentralized method to assign authority. While a centralized\napproach can keep the state of the game (i.e., state of all the\nplayers and objects) consistent across any number of client\nmachines, it suffers from delayed response in game-play as\nany move that a player at the client machine makes must go\nthrough one round-trip delay to the server before it can take\neffect on the client\"s screen. In addition to the round-trip\ndelay, there is also queuing delay in processing the state change\nrequest at the server. This can result in additional\nprocessing delay, and can also bring in severe scalability problems\nif there are large number of clients playing the game. One\ndefinite advantage of the server authoritative approach is\nthat it can easily detect if a client is cheating and can take\nappropriate action to prevent cheating.\nThe peer-to-peer approach, henceforth referred to as client\nauthoritative, can make games very responsive. However,\nit can make the game state inconsistent for a few players\nand tie break (or roll back) has to be performed to bring the\ngame back to a consistent state. Neither tie break nor roll\nback is a desirable feature of online gaming. For example,\nassume that for a game, the goal of each player is to collect\nas many flags as possible from the game space (e.g. BzFlag).\nWhen two players in proximity try to collect the same flag\nat the same time, depending on the algorithm used at the\nclient-side, both clients may determine that it is the winner,\nalthough in reality only one player can pick the flag up. Both\nplayers will see on their screen that it is the winner. This\nmakes the state of the game inconsistent. Ways to recover\nfrom this inconsistency are to give the flag to only one player\n(using some tie break rule) or roll the game back so that the\nplayers can try again. Neither of these two approaches is\na pleasing experience for online gaming. Another problem\nwith client authoritative approach is that of cheating by\nclients as there is no cross checking of the validation of the\nstate changes authorized by the owner client.\nWe propose to use a hybrid approach to assign the authority\ndynamically between the client and the server. That is, we\nassign the authority to the client to make the game\nresponsive, and use the server\"s authority only when the client\"s\nindividual authoritative decisions can make the game state\ninconsistent. By moving the authority of time critical\nupdates to the client, we avoid the added delay caused by\nrequiring the server to validate these updates. For example,\nin the flag pickup game, the clients will be given the\nauthority to pickup flags only when other players are not within\na range that they could imminently pickup a flag. Only\nwhen two or more players are close by so that more than\none player may claim to have picked up a flag, the authority\nfor movement and flag pickup would go to the central server\nso that the game state does not become inconsistent. 
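The pickup rule just outlined can be stated compactly. The following is a minimal sketch under assumptions of our own: the contention radius and the function name are hypothetical, and a real game would apply the same test per item as players move.

```python
import math

CONTENTION_RADIUS = 50.0   # assumed game-specific threshold, not from the paper

def pickup_authority(item_pos, rival_positions):
    """Return 'client' when no rival is close enough to contend for the
    item, so a local claim cannot conflict; otherwise defer to the server."""
    contended = any(math.dist(p, item_pos) <= CONTENTION_RADIUS
                    for p in rival_positions)
    return "server" if contended else "client"

print(pickup_authority((0.0, 0.0), []))              # -> client (claim locally)
print(pickup_authority((0.0, 0.0), [(20.0, 10.0)]))  # -> server (arbitrate)
```

Either way the server can still verify the claim after the fact; the switch only decides whether the client must wait for that verification before acting.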
We believe that in a large game-space where a player is often in a very wide open and sparsely populated area, such as those often seen in the game Second Life [13], this hybrid architecture would be very beneficial because of the long periods during which the client would have authority to send movement updates for itself. This has two advantages over the central-authority approach: it distributes the processing load down to the clients for the majority of events, and it allows for a more responsive game that does not need to wait on a server for validation.
We believe that our notion of authority can be used to develop a globally consistent state model of the evolution of a game. Fundamentally, the consistent state of the system is the one that is defined by the server. However, if local authority is delegated to the client, the client's state is superimposed on the server's state to determine the correct global state. For example, if the client is authoritative with respect to movement of a player, then the trajectory of the player is the true trajectory and must replace the server's view of the player's trajectory. Note that this could be problematic and lead to temporal inconsistency only if, for example, two or more entities are moving in the same region and can interact with each other. In this situation, the client authority must revert to the server, and the server would then make decisions. Thus, the client is only authoritative in situations where there is no potential to imminently interact with other players. We believe that in complex MMOGs, when allowing more rapid movement, it will still be the case that local authority is possible for significant spans of game time. Note that it might also be possible to minimize the occurrences of the Dead Man Shooting problem described in [14]. This could be done by allowing the client to be authoritative for more actions such as its player's own death, and disallowing other players from making preemptive decisions based on a remote player.
One reason why the client-server based architecture has gained popularity is due to the belief that the fastest route to the other clients is through the server. While this may be true, we aim to create a new architecture where decisions do not always have to be made at the game server and the fastest route to a client is actually through a communication proxy located close to the client. That is, the shortest distance in our architecture is not through the game server but through the communication proxy. After a client makes an action such as movement, it will simultaneously distribute it directly to the clients and the game server by way of the communication proxy. We note, however, that our architecture is not practical for a game where game players set up their own servers in an ad-hoc fashion and do not have access to proxies at the various ISPs. This proxy and distributed authority architecture can be used to its full potential only when the proxies can be placed at strategic places within the main ISPs and evenly distributed geographically.
Our game architecture does not assume that the client is not to be trusted. We are designing our architecture on the premise that there will be sufficient cheat deterring and detection mechanisms present so that it will be both undesirable and very difficult to cheat [15].
In our proposed approach,\nwe can make the games cheat resilient by using the\nproxybased architecture when client authoritative decisions take\nplace. In order to achieve this, the proxies have to be game\ncognizant so that decisions made by a client can be cross\nchecked by a proxy that the client connects to. For\nexample, assume that in a game a plane controlled by a client\nmoves in the game space. It is not possible for the plane to\ngo through a building unharmed. In a client authoritative\nmode, it is possible for the client to cheat by maneuvering\nthe plane through a building and claiming the plane to be\nunharmed. However, when such move is published by the\nclient, the proxy, being aware of the game space that the\nplane is in, can quickly check that the client has misused\nthe authority and then can block such move. This allows us\nto distribute authority to make decisions about the clients.\nIn the following section we use a multiplayer game called\nRPGQuest to implement different authoritative schemes and\ndiscuss our experience with the implementation. Our\nimplementation shows the viability of our proposed solution.\n4. IMPLEMENTATION EXPERIENCE\nWe have experimented with the authority assignment\nmechanism described in the last section by implementing the\nmechanisms in a game called RPGQuest. A screen shot from\nthis game is shown in Figure 3. The purpose of the\nimplementation is to test its feasibility in a real game. RPGQuest\nis a basic first person game where the player can move\naround a three dimensional environment. Objects are placed\nwithin the game world and players gain points for each\nobject that is collected. The game clients connect to a game\nserver which allows many players to coexist in the same\ngame world. The basic functionality of this game is\nrepresentative of current online first person shooter and role playing\ngames. The game uses the DirectX 8 graphics API and\nDirectPlay networking API. In this section we will discuss the\nthree different versions of the game that we experimented\nwith.\nFigure 3: The RPGQuest Game.\nThe first version of the game, which is the original\nimplementation of RPGQuest, was created with a completely\nauthoritative server and a non-authoritative client. Authority\ngiven to the server includes decisions of when a player\ncollides with static objects and other players and when a player\npicks up an object. This version of the game performs well\nup to 100ms round-trip latency between the client and the\nserver. There is little lag between the time player hits a\nwall and the time the server corrects the player\"s position.\nHowever, as more latency is induced between the client and\nserver, the game becomes increasingly difficult to play. With\nthe increased latency, the messages coming from the server\ncorrecting the player when it runs into a wall are not\nreceived fast enough. This causes the player to pass through\nthe wall for the period that it is waiting for the server to\nresolve the collision.\nWhen studying the source code of the original version of\nthe RPGQuest game, there is a substantial delay that is\nunavoidable each time an action must be validated by the\nserver. Whenever a movement update is sent to the server,\nthe client must then wait whatever the round trip delay is,\nplus some processing time at the server in order to receive\nits validated or corrected position. 
This is obviously unacceptable in any game where movement or any other rapidly changing state information must be validated and disseminated to the other clients rapidly.
In order to get around this problem, we developed a second version of the game, which gives all authority to the client. The client was delegated the authority to validate its own movement and the authority to pick up objects without validation from the server. In this version of the game, when a player moves around the game space, the client validates that the player's new position does not intersect with any walls or static objects. A position update is then sent to the server, which immediately forwards the update to the other clients within the region. The update does not have to go through any extra processing or validation.
This game model of complete authority given to the client is beneficial with respect to movement. When latencies of 100ms and up are induced into the link between the client and server, the game is still playable, since time critical aspects of the game like movement do not have to wait on a reply from the server. When a player hits a wall, the collision is processed locally and does not have to wait on the server to resolve the collision.
Although game playing experience with respect to responsiveness is improved when the authority for movement is given to the client, there are still aspects of games that do not benefit from this approach. The most important of these is consistency. Although actions such as movement are time critical, other actions are not as time critical, but instead require consistency among the player states. An example of a game aspect that requires consistency is picking up objects that should only be possessed by a single player.
In our client-authoritative version of RPGQuest, clients send their own updates to all other players whenever they pick up an object. From our tests we have realized this is a problem because, when there is a realistic amount of latency between the client and server, it is possible for two players to pick up the same object at the same time. When two players attempt to pick up an object at physical times which are close to each other, the update sent by the player who picked up the object first will not reach the second player in time for it to see that the object has already been claimed. The two players will now both think that they own the object. This is why a server still needs to be authoritative in this situation and maintain consistency among the players.
These two versions of the RPGQuest game have shown us why it is necessary to mix the two absolute models of authority. It is better to place authority on the client for quickly changing actions such as movement. It is not desirable to have to wait for server validation on a movement that could change before the reply is even received. It is also sometimes necessary to place consistency over efficiency in aspects of the game that cannot tolerate any inconsistencies, such as object ownership.
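For concreteness, the local movement check used by this second, client-authoritative version might look like the sketch below. The axis-aligned wall boxes and helper names are our own illustrative assumptions, not the RPGQuest code.

```python
def blocked(pos, walls):
    # Static geometry as axis-aligned boxes; each wall is (x0, y0, x1, y1).
    x, y = pos
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in walls)

def move_player(player_pos, new_pos, walls, send):
    if blocked(new_pos, walls):
        return player_pos          # collision resolved locally, no server round trip
    send(("pos_update", new_pos))  # server merely forwards this to nearby clients
    return new_pos

# Example: the second move is rejected locally instead of waiting roughly one
# RTT for the server to bounce the player back out of the wall.
walls = [(100.0, 0.0, 110.0, 50.0)]
pos = move_player((90.0, 10.0), (95.0, 10.0), walls, print)   # accepted
pos = move_player(pos, (105.0, 10.0), walls, print)           # rejected locally
```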
We believe that as the interactivity of\ngames increases, our architecture of mixed authority that\ndoes not rely on server validation will be necessary.\nTo test the benefits and show the feasibility of our\narchitecture of mixed authority, we developed a third version of\nthe RPGQuest game that distributed authority for\ndifferent actions between the client and server. In this version,\nin the interest of consistency, the server remained\nauthoritative for deciding who picked up an object. The client\nwas given full authority to send positional updates to other\nclients and verify its own position without the need to\nverify its updates with the server. When the player tries to\nmove their avatar, the client verifies that the move will not\ncause it to move through a wall. A positional update is then\nsent to the server which then simply forwards it to the other\nclients within the region. This eliminates any extra\nprocessing delay that would occur at the server and is also a more\naccurate means of verification since the client has a more\naccurate view of its own state than the server.\nThis version of the RPGQuest game where authority is\ndistributed between the client and server is an improvement\nfrom the server authoritative version. The client has no\ndelay in waiting for an update for its own position and other\nclients do not have to wait on the server to verify the update.\nThe inconsistencies where two clients can pick up the same\nobject in the client authoritative architecture are not present\nin this version of the client. However, the benefits of mixed\nauthority will not truly be seen until an implementation of\nour communication proxy is integrated into the game. With\nthe addition of the communication proxy, after the client\nverifies its own positional updates it will be able to send the\nupdate to all clients within its region through a low latency\nlink instead of having to first go through the game server\nwhich could possibly be in a very remote location.\nThe coding of the different versions of the game was very\nsimple. The complexity of the client increased very slightly\nin the client authoritative and hybrid models. The\noriginal dumb clients of RPGQuest know the position of other\nplayers; it is not just sent a screen snapshot from the server.\nThe server updates each client with the position of all nearby\nclients. The dumb clients use client side prediction to fill\nin the gaps between the updates they receive. The only\nextra processing the client has to do in the hybrid architecture\nis to compare its current position to the positions of all\nobjects (walls, boxes, etc.) in its area. This obviously means\nthat each client will have to already have downloaded the\nlocations of all static objects within its current region.\n5. RELATED WORK\nIt has been noted that in addition to latency, bandwidth\nrequirements also dictate the type of gaming architecture to\nbe used. In [16], different types of architectures are\nstudied with respect to bandwidth efficiencies and latency. It is\npointed out that Central Server architectures are not\nscalable because of bandwidth requirements at the server but\nthe overhead for consistency checks are limited as they are\nperformed at the server. 
A Peer-to-Peer architecture, on the other hand, is scalable, but there is a significant overhead for consistency checks as these are required at every player. The paper proposes a hybrid architecture which is Peer-to-Peer in terms of message exchange (and thereby is scalable) where a Central Server is used for off-line consistency checks (thereby mitigating consistency check overhead). The paper provides an implementation example of BzFlag, which is a peer-to-peer game that is modified to transfer all authority to a central server. In essence, this paper advocates an authority architecture which is server based even for peer-to-peer games, but does not consider division of authority between a client and a server to minimize latency, which could affect game playing experience even with the type of latency found in server based games (where all authority is with the server).
There is also previous work that has suggested that proxy based architectures be used to alleviate the latency problem and, in addition, use proxies to provide congestion control and cheat-proof mechanisms in distributed multi-player games [17]. In [18], a proxy server-network architecture is presented that is aimed at improving scalability of multiplayer games and lowering latency in server-client data transmission. The main goal of this work is to improve scalability of First-Person Shooter (FPS) and RPG games. The further objective is to improve the responsiveness of MMOGs by providing low latency communications between the client and server. The architecture uses interconnected proxy servers that each have a full view of the global game state. Proxy servers are located at various different ISPs. It is mentioned in this work that dividing the game space among multiple game servers, such as the federated model presented in [19], is inefficient for a relatively fast game flow and that the proposed architecture alleviates this problem because users do not have to connect to a different server whenever they cross the server boundary. This architecture still requires all proxies to be aware of the overall game state over the whole game space, unlike our work where we require the proxies to maintain only partial state information about the game space.
Fidelity based agent architectures have been proposed in [20, 21]. These works propose a distributed client-server architecture for distributed interactive simulations where different servers are responsible for different portions of the game space. When an object moves from one portion to another, there is a handoff from one server to another.
Although these works propose an architecture where different portions of the simulation space are managed by different servers, they do not address the issue of decreasing the bandwidth required through the use of communication proxies.
Our work differs from the above discussed previous works by proposing a) a distributed proxy-based architecture to decrease bandwidth requirements at the clients and the servers without requiring the proxies to keep state information about the whole game space, b) a dynamic authority assignment technique to reduce latency (by performing consistency checks locally at the client whenever possible) by splitting the authority between the clients and servers on a per object basis, and c) proposing that cheat detection can be built into the proxies if they are provided more information about the specific game instead of using them purely as communication proxies (although this idea has not been implemented yet and is part of our future work).
6. CONCLUSIONS AND FUTURE WORK
In this paper, we first proposed a proxy-based architecture for MMOGs that enables MMOGs to scale to a large number of users by mitigating the need for a large number of transport sessions to be maintained and decreasing both bandwidth overhead and latency of event update. Second, we proposed a mixed authority assignment mechanism that divides authority for making decisions on actions and events within the game between the clients and server, and argued how such an authority assignment leads to better game playing experience without sacrificing the consistency of the game. Third, to validate the viability of the mixed authority assignment mechanism, we implemented it within an MMOG called RPGQuest and described our implementation experience.
In future work, we propose to implement the communication proxy architecture described in this paper and integrate the mixed authority mechanism within this architecture. We propose to evaluate the benefits of the proxy-based architecture in terms of scalability, accuracy, and responsiveness. We also plan to implement a version of the RPGQuest game with dynamic assignment of authority to allow players the authority to pick up objects when no other players are near. As discussed earlier, this will allow for a more efficient and responsive game in certain situations and alleviate some of the processing load from the server.
Also, since so much trust is put into the clients of our architecture, it will be necessary to integrate into the architecture many of the cheat detection schemes that have been proposed in the literature. Software such as Punkbuster [22] and a reputation system like those proposed by [23] and [15] would be integral to the operation of an architecture such as ours, which has a lot of trust placed on the client. We further propose to make the proxies in our architecture more game cognizant so that cheat detection mechanisms can be built into the proxies themselves.
7. REFERENCES
[1] Y. W. Bernier. Latency Compensation Methods in Client/Server In-game Protocol Design and Optimization. In Proc. of Game Developers Conference '01, 2001.
[2] Lothar Pantel and Lars C. Wolf. On the impact of delay on real-time multiplayer games. In NOSSDAV '02: Proceedings of the 12th international workshop on Network and operating systems support for digital audio and video, pages 23-29, New York, NY, USA, 2002. ACM Press.
[3] G. Armitage. Sensitivity of Quake3 Players to Network Latency. In Proc. of IMW2001, Workshop Poster Session, November 2001. http://www.geocities.com/gjarmitage/q3/quake-results.html.
[4] Tobias Fritsch, Hartmut Ritter, and Jochen Schiller. The effect of latency and network limitations on mmorpgs: a field study of everquest2. In NetGames '05: Proceedings of 4th ACM SIGCOMM workshop on Network and system support for games, pages 1-9, New York, NY, USA, 2005. ACM Press.
[5] Tom Beigbeder, Rory Coughlan, Corey Lusher, John Plunkett, Emmanuel Agu, and Mark Claypool. The effects of loss and latency on user performance in unreal tournament 2003. In NetGames '04: Proceedings of 3rd ACM SIGCOMM workshop on Network and system support for games, pages 144-151, New York, NY, USA, 2004. ACM Press.
[6] Y. Lin, K. Guo, and S. Paul. Sync-MS: Synchronized Messaging Service for Real-Time Multi-Player Distributed Games. In Proc. of 10th IEEE International Conference on Network Protocols (ICNP), Nov 2002.
[7] Katherine Guo, Sarit Mukherjee, Sampath Rangarajan, and Sanjoy Paul. A fair message exchange framework for distributed multi-player games. In NetGames '03: Proceedings of the 2nd workshop on Network and system support for games, pages 29-41, New York, NY, USA, 2003. ACM Press.
[8] T. Barron. Multiplayer Game Programming, chapter 16-17, pages 672-731. Prima Tech's Game Development Series. Prima Publishing, 2001.
[9] Carsten Griwodz and Pål Halvorsen. The fun of using tcp for an mmorpg. In NOSSDAV '06: Proceedings of the International Workshop on Network and Operating Systems Support for Digital Audio and Video, New York, NY, USA, 2006. ACM Press.
[10] Sudhir Aggarwal, Hemant Banavar, Amit Khandelwal, Sarit Mukherjee, and Sampath Rangarajan. Accuracy in dead-reckoning based distributed multi-player games. In NetGames '04: Proceedings of 3rd ACM SIGCOMM workshop on Network and system support for games, pages 161-165, New York, NY, USA, 2004. ACM Press.
[11] Sudhir Aggarwal, Hemant Banavar, Sarit Mukherjee, and Sampath Rangarajan. Fairness in dead-reckoning based distributed multi-player games. In NetGames '05: Proceedings of 4th ACM SIGCOMM workshop on Network and system support for games, pages 1-10, New York, NY, USA, 2005. ACM Press.
[12] T. Riker et al. BzFlag. http://www.bzflag.org, 2000-2006.
[13] Linden Lab. Second life. http://secondlife.com, 2003.
[14] Martin Mauve. How to keep a dead man from shooting. In IDMS '00: Proceedings of the 7th International Workshop on Interactive Distributed Multimedia Systems and Telecommunication Services, pages 199-204, London, UK, 2000. Springer-Verlag.
[15] Max Skibinsky. Massively Multiplayer Game Development 2, chapter The Quest for Holy Scale, Part 2: P2P Continuum, pages 355-373. Charles River Media, 2005.
[16] Joseph D. Pellegrino and Constantinos Dovrolis. Bandwidth requirement and state consistency in three multiplayer game architectures. In NetGames '03: Proceedings of the 2nd workshop on Network and system support for games, pages 52-59, New York, NY, USA, 2003. ACM Press.
[17] M. Mauve, J. Widmer, and S. Fischer. A Generic Proxy System for Networked Computer Games. In Proc. of the Workshop on Network Games, Netgames 2002, April 2002.
[18] J. Muller, S. Fischer, S. Gorlatch, and M. Mauve. A Proxy Server Network Architecture for Real-Time Computer Games. In Euro-Par 2004 Parallel Processing: 10th International EURO-PAR Conference, August-September 2004.
[19] T. Iimura, H. Hazeyama, and Y. Kadobayashi. Zoned Federation of Game Servers: A Peer-to-Peer Approach to Scalable Multiplayer On-line Games. In Proc. of ACM Workshop on Network Games, Netgames 2004, August-September 2004.
[20] B. Kelly and S. Aggarwal. A Framework for a Fidelity Based Agent Architecture for Distributed Interactive Simulation. In Proc. 14th Workshop on Standards for Distributed Interactive Simulation, pages 541-546, March 1996.
[21] S. Aggarwal and B. Kelly. Hierarchical Structuring for Distributed Interactive Simulation. In Proc. 13th Workshop on Standards for Distributed Interactive Simulation, pages 125-132, Sept 1995.
[22] Even Balance, Inc. Punkbuster. http://www.evenbalance.com/, 2001-2006.
[23] Y. Wang and J. Vassileva. Trust and Reputation Model in Peer-to-Peer Networks. In Third International Conference on Peer-to-Peer Computing, 2003.", "keywords": "cheat-proof mechanism;latency compensation;role playing game;artificial latency;mmog;central-server architecture;assignment of authority;authority;proxy-based game architecture;first person shooter;communication proxy;distribute multi-player game;client authoritative approach;authority assignment;multi-player online game"} {"name": "train_C-62", "title": "Network Monitors and Contracting Systems: Competition and Innovation", "abstract": "Today's Internet industry suffers from several well-known pathologies, but none is as destructive in the long term as its resistance to evolution. Rather than introducing new services, ISPs are presently moving towards greater commoditization. It is apparent that the network's primitive system of contracts does not align incentives properly. In this study, we identify the network's lack of accountability as a fundamental obstacle to correcting this problem: Employing an economic model, we argue that optimal routes and innovation are impossible unless new monitoring capability is introduced and incorporated with the contracting system. Furthermore, we derive the minimum requirements a monitoring system must meet to support first-best routing and innovation characteristics. Our work does not constitute a new protocol; rather, we provide practical and specific guidance for the design of monitoring systems, as well as a theoretical framework to explore the factors that influence innovation.", "fulltext": "1. INTRODUCTION
Many studies before us have noted the Internet's resistance to new services and evolution. In recent decades, numerous ideas have been developed in universities, implemented in code, and even written into the routers and end systems of the network, only to languish as network operators fail to turn them on on a large scale. The list includes Multicast, IPv6, IntServ, and DiffServ. Lacking the incentives just to activate services, there seems to be little hope of ISPs devoting adequate resources to developing new ideas. In the long term, this pathology stands out as a critical obstacle to the network's continued success (Ratnasamy, Shenker, and McCanne provide extensive discussion in [11]).
On a smaller time scale, ISPs shun new services in favor of cost-cutting measures. Thus, the network has characteristics of a commodity market. Although, in theory, ISPs have a plethora of routing policies at their disposal, the prevailing strategy is to route in the cheapest way possible [2]. On one hand, this leads directly to suboptimal routing.
More importantly, commoditization in the short\nterm is surely related to the lack of innovation in the long term.\nWhen the routing decisions of others ignore quality characteristics,\nISPs are motivated only to lower costs. There is simply no reward\nfor introducing new services or investing in quality improvements.\nIn response to these pathologies and others, researchers have put\nforth various proposals for improving the situation. These can be\ndivided according to three high-level strategies: The first attempts\nto improve the status quo by empowering end-users. Clark, et al.,\nsuggest that giving end-users control over routing would lead to\ngreater service diversity, recognizing that some payment mechanism\nmust also be provided [5]. Ratnasamy, Shenker, and McCanne\npostulate a link between network evolution and user-directed\nrouting [11]. They propose a system of Anycast to give end-users\nthe ability to tunnel their packets to an ISP that introduces a\ndesirable protocol. The extra traffic to the ISP, the authors suggest,\nwill motivate the initial investment.\nThe second strategy suggests a revision of the contracting system.\nThis is exemplified by MacKie-Mason and Varian, who propose a\nsmart market to control access to network resources [10]. Prices\nare set to the market-clearing level based on bids that users associate\nto their traffic. In another direction, Afergan and Wroclawski\nsuggest that prices should be explicitly encoded in the routing\nprotocols [2]. They argue that such a move would improve stability\nand align incentives.\nThe third high-level strategy calls for greater network\naccountability. In this vein, Argyraki, et al., propose a system of\npacket obituaries to provide feedback as to which ISPs drop packets\n[3]. They argue that such feedback would help reveal which ISPs\nwere adequately meeting their contractual obligations. Unlike the\nfirst two strategies, we are not aware of any previous studies that\nhave connected accountability with the pathologies of\ncommoditization or lack of innovation.\nIt is clear that these three strategies are closely linked to each other\n(for example, [2], [5], and [9] each argue that giving end-users\nrouting control within the current contracting system is\nproblematic). Until today, however, the relationship between them\nhas been poorly understood. There is currently little theoretical\nfoundation to compare the relative merits of each proposal, and a\nparticular lack of evidence linking accountability with innovation\nand service differentiation. This paper will address both issues.\nWe will begin by introducing an economic network model that\nrelates accountability, contracts, competition, and innovation. Our\nmodel is highly stylized and may be considered preliminary: it is\nbased on a single source sending data to a single destination.\nNevertheless, the structure is rich enough to expose previously\nunseen features of network behavior. We will use our model for\ntwo main purposes:\nFirst, we will use our model to argue that the lack of accountability\nin today\"s network is a fundamental obstacle to overcoming the\npathologies of commoditization and lack of innovation. In other\nwords, unless new monitoring capabilities are introduced, and\nintegrated with the system of contracts, the network cannot achieve\noptimal routing and innovation characteristics. 
This result provides\nmotivation for the remainder of the paper, in which we explore how\naccountability can be leveraged to overcome these pathologies and\ncreate a sustainable industry. We will approach this problem from a\nclean-slate perspective, deriving the level of accountability needed\nto sustain an ideal competitive structure.\nWhen we say that today\"s Internet has poor accountability, we mean\nthat it reveals little information about the behavior - or misbehavior\n- of ISPs. This well-known trait is largely rooted in the network\"s\nhistory. In describing the design philosophy behind the Internet\nprotocols, Clark lists accountability as the least important among\nseven second level goals. [4] Accordingly, accountability\nreceived little attention during the network\"s formative years. Clark\nrelates this to the network\"s military context, and finds that had the\nnetwork been designed for commercial development, accountability\nwould have been a top priority.\nArgyraki, et al., conjecture that applying the principles of layering\nand transparency may have led to the network\"s lack of\naccountability [3]. According to these principles, end hosts should\nbe informed of network problems only to the extent that they are\nrequired to adapt. They notice when packet drops occur so that they\ncan perform congestion control and retransmit packets. Details of\nwhere and why drops occur are deliberately concealed.\nThe network\"s lack of accountability is highly relevant to a\ndiscussion of innovation because it constrains the system of\ncontracts. This is because contracts depend upon external\ninstitutions to function - the judge in the language of incomplete\ncontract theory, or simply the legal system. Ultimately, if a judge\ncannot verify that some condition holds, she cannot enforce a\ncontract based on that condition. Of course, the vast majority of\ncontracts never end up in court. Especially when a judge\"s ruling is\neasily predicted, the parties will typically comply with the contract\nterms on their own volition. This would not be possible, however,\nwithout the judge acting as a last resort.\nAn institution to support contracts is typically complex, but we\nabstract it as follows: We imagine that a contract is an algorithm\nthat outputs a payment transfer among a set of ISPs (the parties) at\nevery time. This payment is a function of the past and present\nbehaviors of the participants, but only those that are verifiable.\nHence, we imagine that a contract only accepts proofs as inputs.\nWe will call any process that generates these proofs a contractible\nmonitor. Such a monitor includes metering or sensing devices on\nthe physical network, but it is a more general concept. Constructing\na proof of a particular behavior may require readings from various\ndevices distributed among many ISPs. The contractible monitor\nincludes whatever distributed algorithmic mechanism is used to\nmotivate ISPs to share this private information.\nFigure 1 demonstrates how our model of contracts fits together. We\nmake the assumption that all payments are mediated by contracts.\nThis means that without contractible monitors that attest to, say,\nlatency, payments cannot be conditioned on latency.\nFigure 1: Relationship between monitors and contracts\nWith this model, we may conclude that the level of accountability in\ntoday\"s Internet only permits best effort contracts. 
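As an illustration of this abstraction, the sketch below renders a contract as an algorithm from proofs to a payment transfer among the parties. The Proof structure, the latency bound, and the verification stub are all hypothetical stand-ins for whatever evidence a judge would actually accept.

```python
from dataclasses import dataclass

@dataclass
class Proof:
    claim: str        # e.g. "latency_ms" over some contracted segment
    value: float
    verified: bool    # stands in for a signature scheme a judge accepts

def latency_contract(proofs, price=10.0, penalty=4.0, bound_ms=100.0):
    """A contract as an algorithm: map verifiable behavior to a payment
    transfer between the two parties. Unverifiable claims move no money."""
    transfer = {"customer": -price, "transit": +price}   # baseline payment
    for p in proofs:
        if p.verified and p.claim == "latency_ms" and p.value > bound_ms:
            transfer["customer"] += penalty              # partial refund
            transfer["transit"] -= penalty
    return transfer

print(latency_contract([Proof("latency_ms", 140.0, True)]))
# -> {'customer': -6.0, 'transit': 6.0}
print(latency_contract([Proof("latency_ms", 140.0, False)]))
# -> {'customer': -10.0, 'transit': 10.0}  (claim without proof: no effect)
```

The point of the sketch is the input type: without a contractible monitor producing the latency proof, the penalty branch can never fire, so payments cannot be conditioned on latency at all.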
Is there anything wrong with best-effort contracts? The reader might wonder why the Internet needs contracts at all. After all, in non-network industries, traditional firms invest in research and differentiate their products, all in the hopes of keeping their customers and securing new ones. One might believe that such market forces apply to ISPs as well. We may adopt this as our null hypothesis:

Null hypothesis: Market forces are sufficient to maintain service diversity and innovation on a network, at least to the same extent as they do in traditional markets.

There is a popular intuitive argument that supports this hypothesis, and it may be summarized as follows:

Intuitive argument supporting null hypothesis:
1. Access providers try to increase their quality to get more consumers.
2. Access providers are themselves customers for second-hop ISPs, and the second hops will therefore try to provide high-quality service in order to secure traffic from access providers. Access providers try to select high-quality transit because that increases their quality.
3. The process continues through the network, giving every ISP a competitive reason to increase quality.

We are careful to model our network in continuous time, in order to capture the essence of this argument. We can, for example, specify equilibria in which nodes switch to a new next hop in the event of a quality drop. Moreover, our model allows us to explore any theoretically possible punishments against cheaters, including those that are costly for end-users to administer. By contrast, customers in the real world rarely respond collectively, and often simply seek the best deal currently offered. These constraints limit their ability to punish cheaters.

Even with these liberal assumptions, however, we find that we must reject our null hypothesis. Our model will demonstrate that identifying a cheating ISP is difficult under low accountability, limiting the threat of market-driven punishment. We will define an index of commoditization and show that it increases without bound as data paths grow long. Furthermore, we will demonstrate a framework in which an ISP's maximum research investment decreases hyperbolically with its distance from the end-user.

To summarize, we argue that the Internet's lack of accountability must be addressed before the pathologies of commoditization and lack of innovation can be resolved. This leads us to our next topic: How can we leverage accountability to overcome these pathologies? We approach this question from a clean-slate perspective. Instead of focusing on incremental improvements, we try to imagine how an ideal industry would behave, then derive the level of accountability needed to meet that objective. According to this approach, we first craft a new equilibrium concept appropriate for network competition. Our concept includes the following requirements: First, we require that punishing ISPs that cheat is done without rerouting the path. Rerouting is likely to prompt end-users to switch providers, punishing access providers who administer punishments correctly. Next, we require that the equilibrium cannot be threatened by a coalition of ISPs that exchanges illicit side payments.
Finally, we require that the punishment mechanism that enforces contracts does not punish innocent nodes that are not in the coalition.

The last requirement is somewhat unconventional from an economic perspective, but we maintain that it is crucial for any reasonable solution. Although ISPs provide complementary services when they form a data path together, they are likely to be horizontal competitors as well. If innocent nodes may be punished, an ISP may decide to deliberately cheat and draw punishment onto itself and its neighbors. By cheating, the ISP may save resources, thereby ensuring that the punishment is more damaging to the other ISPs, which probably compete with the cheater directly for some customers. In the extreme case, the cheater may force the other ISPs out of business, thereby gaining a monopoly on some routes.

Applying this equilibrium concept, we derive the monitors needed to maintain innovation and optimize routes. The solution is surprisingly simple: contractible monitors must report the quality of the rest of the path, from each ISP to the destination. It turns out that this is the correct minimum accountability requirement, as opposed to either end-to-end monitors or hop-by-hop monitors, as one might initially suspect.

Rest of path monitors can be implemented in various ways. They may be purely local algorithms that listen for packet echoes. Alternately, they can be distributed in nature. We describe a way to construct a rest of path monitor out of monitors for individual ISP quality and for the data path. This requires a mechanism to motivate ISPs to share their monitor outputs with each other. The rest of path monitor then includes the component monitors and the distributed algorithmic mechanism that ensures that information is shared as required. This example shows that other types of monitors may be useful as building blocks, but must be combined to form rest of path monitors in order to achieve ideal innovation characteristics.

Our study has several practical implications for future protocol design. We show that new monitors must be implemented and integrated with the contracting system before the pathologies of commoditization and lack of innovation can be overcome. Moreover, we derive exactly what monitors are needed to optimize routes and support innovation. In addition, our results provide useful input for clean-slate architectural design, and we use several novel techniques that we expect will be applicable to a variety of future research.

The rest of this paper is organized as follows: In section 2, we lay out our basic network model. In section 3, we present a low-accountability network, modeled after today's Internet. We demonstrate how poor monitoring causes commoditization and a lack of innovation. In section 4, we present verifiable monitors, and show that proofs, even without contracts, can improve the status quo. In section 5, we turn our attention to contractible monitors. We show that rest of path monitors can support competition games with optimal routing and innovation. We further show that rest of path monitors are required to support such competition games. We continue by discussing how such monitors may be constructed using other monitors as building blocks. In section 6, we conclude and present several directions for future research.

2. BASIC NETWORK MODEL

A source, S, wants to send data to destination, D.
S and D are nodes on a directed, acyclic graph, with a finite set of intermediate nodes, V = {1, 2, ..., N}, representing ISPs. All paths lead to D, and every node not connected to D has at least two choices for next hop.

We will represent quality by a finite-dimensional vector space, Q, called the quality space. Each dimension represents a distinct network characteristic that end-users care about. For example, latency, loss probability, jitter, and IP version can each be assigned to a dimension.

To each node, i, we associate a vector in the quality space, q_i ∈ Q. This corresponds to the quality a user would experience if i were the only ISP on the data path. Let q ∈ Q^N be the vector of all node qualities.

Of course, when data passes through multiple nodes, their qualities combine in some way to yield a path quality. We represent this by an associative binary operation, * : Q × Q → Q. For path (v_1, v_2, ..., v_n), the quality is given by q_{v_1} * q_{v_2} * ... * q_{v_n}. The * operation reflects the characteristics of each dimension of quality. For example, * can act as an addition in the case of latency, a multiplication in the case of loss probability, or a minimum-argument function in the case of security.

When data flows along a complete path from S to D, the source and destination, generally regarded as a single player, enjoy utility given by a function of the path quality, u : Q → R. Each node along the path, i, experiences some cost of transmission, c_i.
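As an illustration, the following Python sketch implements one possible * operation for a three-dimensional quality space (latency, delivery probability, and a security level). The choice of dimensions and combining rules is ours, made only to show how per-dimension composition yields a path quality.

from dataclasses import dataclass
from functools import reduce

@dataclass
class Quality:
    latency_ms: float    # combines additively along a path
    delivery_prob: float # combines multiplicatively
    security: int        # a path is only as secure as its weakest hop

def star(a: Quality, b: Quality) -> Quality:
    """The associative operation *: Q x Q -> Q, applied per dimension."""
    return Quality(a.latency_ms + b.latency_ms,
                   a.delivery_prob * b.delivery_prob,
                   min(a.security, b.security))

def path_quality(path):
    """Quality of a path (v_1, ..., v_n): q_v1 * q_v2 * ... * q_vn."""
    return reduce(star, path)

# A three-hop path with illustrative per-node qualities:
q = path_quality([Quality(10, 0.999, 3), Quality(25, 0.99, 2), Quality(5, 0.995, 3)])
print(q)  # Quality(latency_ms=40, delivery_prob=~0.984, security=2)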
2.1 Game Dynamics

Ultimately, we are most interested in policies that promote innovation on the network. In this study, we will use innovation in a fairly general sense. Innovation describes any investment by an ISP that alters its quality vector so that at least one potential data path offers higher utility. This includes researching a new routing algorithm that decreases the amount of jitter users experience. It also includes deploying a new protocol that supports quality of service. Even more broadly, buying new equipment to decrease latency may also be regarded as innovation. Innovation may be thought of as the micro-level process by which the network evolves.

Our analysis is limited in one crucial respect: We focus on inventions that a single ISP can implement to improve the end-user experience. This excludes technologies that require adoption by all ISPs on the network to function. Because such technologies do not create a competitive advantage, rewarding them is difficult and may require intellectual property or some other market distortion. We defer this interesting topic to future work.

At first, it may seem unclear how a large-scale distributed process such as innovation can be influenced by mechanical details like network monitors. Our model must draw this connection in a realistic fashion. The rate of innovation depends on the profits that potential innovators expect in the future. The reward generated by an invention must exceed the total cost to develop it, or the inventor will not rationally invest. This reward, in turn, is governed by the competitive environment in which the firm operates, including the process by which firms select prices, and agree upon contracts with each other. Of course, these decisions depend on how routes are established, and how contracts determine actual monetary exchanges. Any model of network innovation must therefore relate at least three distinct processes: innovation, competition, and routing.

We select a game dynamics that makes the relation between these processes as explicit as possible. This is represented schematically in Figure 2.

Figure 2: Game Dynamics. The innovation stage (t = -2) determines qualities (q); the competition stage (t = -1) determines contracts (prices); the routing stage (t ∈ [0, ∞)) determines profits.

The innovation stage occurs first, at time t = -2. In this stage, each agent decides whether or not to make research investments. If she chooses not to, her quality remains fixed. If she makes an investment, her quality may change in some way. It is not necessary for us to specify how such changes take place. The agents' choices in this stage determine the vector of qualities, q, common knowledge for the rest of the game.

Next, at time t = -1, agents participate in the competition stage, in which contracts are agreed upon. In today's industry, these contracts include prices for transit access, and peering agreements. Since access is provided on a best-effort basis, a transit agreement can simply be represented by its price. Other contracting systems we will explore will require more detail.

Finally, beginning at t = 0, firms participate in the routing stage. Other research has already employed repeated games to study routing, for example [1], [12]. Repetition reveals interesting effects not visible in a single-stage game, such as informal collusion to elevate prices in [12]. We use a game in continuous time in order to study such properties. For example, we will later ask whether a player will maintain higher quality than her contracts require, in the hope of keeping her customer base or attracting future customers.

Our dynamics reflect the fact that ISPs make innovation decisions infrequently. Although real firms have multiple opportunities to innovate, each opportunity is followed by a substantial length of time in which qualities are fixed. The decision to invest focuses on how the firm's new quality will improve the contracts it can enter into. Hence, our model places innovation at the earliest stage, attempting to capture a single investment decision. Contracting decisions are made on an intermediate time scale, thus appearing next in the dynamics. Routing decisions are made very frequently, mainly to maximize immediate profit flows, so they appear in the last stage.

Because of this ordering, our model does not allow firms to route strategically to affect future innovation or contracting decisions. In opposition, Afergan and Wroclawski argue that contracts are formed in response to current traffic patterns, in a feedback loop [2]. Although we are sympathetic to their observation, such an addition would make our analysis intractable. Our model is most realistic when contracting decisions are infrequent.

Throughout this paper, our solution concept will be a subgame perfect equilibrium (SPE). An SPE is a strategy point that is a Nash equilibrium when restricted to each subgame. Three important subgames have been labeled in Figure 2. The innovation game includes all three stages. The competition game includes only the competition stage and the routing stage. The routing game includes only the routing stage.

An SPE guarantees that players are forward-looking. This means, for example, that in the competition stage, firms must act rationally, maximizing their expected profits in the routing stage. They cannot carry out threats they made in the innovation stage if doing so lowers their expected payoff.

Our schematic already suggests that the routing game is crucial for promoting innovation. To support innovation, the competition game must somehow reward ISPs with high quality. But that means that the routing game must tend to route to nodes with high quality. If the routing game always selects the lowest-cost routes, for example, innovation will not be supported. We will support this observation with analysis later.
2.2 The Routing Game

The routing game proceeds in continuous time, with all players discounting by a common factor, r. The outputs from previous stages, q and the set of contracts, are treated as exogenous parameters for this game. For each time t ≥ 0, each node must select a next hop to route data to. Data flows across the resultant path, causing utility flow to S and D, and a flow cost to the nodes on the path, as described above. Payment flows are also created, based on the contracts in place.

Relating our game to the familiar repeated prisoners' dilemma, imagine that we are trying to impose a high-quality, but costly, path. As we argued loosely above, such paths must be sustainable in order to support innovation. Each ISP on the path tries to maximize her own payment, net of costs, so she may not want to cooperate with our plan. Rather, if she can find a way to save on costs, at the expense of the high quality we desire, she will be tempted to do so. Analogously to the prisoners' dilemma, we will call such a decision cheating. A little more formally,

Cheating refers to any action that an ISP can take, contrary to some target strategy point that we are trying to impose, that enhances her immediate payoff, but compromises the quality of the data path.

One type of cheating relates to the data path. Each node on the path has to pay the next node to deliver its traffic. If the next node offers high-quality transit, we may expect that a lower-quality node will offer a lower price. Each node on the path will be tempted to route to a cheaper next hop, increasing her immediate profits, but lowering the path quality. We will call this type of action cheating in route.

Another possibility we can model is that a node finds a way to save on its internal forwarding costs, at the expense of its own quality. We will call this cheating internally to distinguish it from cheating in route. For example, a node might drop packets beyond the rate required for congestion control, in order to throttle back TCP flows and thus save on forwarding costs [3]. Alternately, a node employing quality of service could give high-priority packets a lower class of service, thus saving on resources and perhaps allowing itself to sell more high-priority service.

If either cheating in route or cheating internally is profitable, the specified path will not be an equilibrium. We assume that cheating can never be caught instantaneously. Rather, a cheater can always enjoy the payoff from cheating for some positive time, which we label t_0. This includes the time for other players to detect and react to the cheating. If the cheater has a contract which includes a customer lock-in period, t_0 also includes the time until customers are allowed to switch to a new ISP. As we will see later, it is socially beneficial to decrease t_0, so such lock-in is detrimental to welfare.
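The following sketch makes the cheating calculus concrete. Under our assumptions, a deviation is profitable when the discounted profit from cheating for time t_0 exceeds the discounted profit forgone during a punishment lasting until some time t_1; all of the numbers below are illustrative.

import math

def discounted_time(t, r):
    """Integral of e^(-r*s) ds from 0 to t."""
    return (1.0 - math.exp(-r * t)) / r

def cheating_pays(pi, y, r, t0, t1):
    """Compare cooperating (profit flow pi until t1) against cheating
    (flow y > pi, undetected for t0, then zero profit until t1).
    Payoffs after t1 are identical in both cases and cancel."""
    cooperate = pi * discounted_time(t1, r)
    cheat = y * discounted_time(t0, r)
    return cheat > cooperate

# With temptation y/pi = 2, r = 0.1 and t0 = 1, deterrence requires
# discounted_time(t1) >= 2 * discounted_time(t0), i.e. t1 >= ~2.11:
print(cheating_pays(pi=1.0, y=2.0, r=0.1, t0=1.0, t1=2.3))  # False: deterred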
3. PATHOLOGIES OF A LOW-ACCOUNTABILITY NETWORK

In order to motivate an exploration of monitoring systems, we begin in this section by considering a network with a poor degree of accountability, modeled after today's Internet. We will show how the lack of monitoring necessarily leads to poor routing and diminishes the rate of innovation. Thus, the network's lack of accountability is a fundamental obstacle to resolving these pathologies.

3.1 Accountability in the Current Internet

First, we reflect on what accountability characteristics the present Internet has. Argyraki, et al., point out that end hosts are given minimal information about packet drops [3]. Users know when drops occur, but not where they occur, nor why. Dropped packets may represent the innocent signaling of congestion, or, as we mentioned above, they may be a form of cheating internally. The problem is similar for other dimensions of quality, or in fact more acute. Finding an ISP that gives high-priority packets a lower class of service, for example, is further complicated by the lack of even basic diagnostic tools.

In fact, it is similarly difficult to identify an ISP that cheats in route. Huston notes that Internet traffic flows do not always correspond to routing information [8]. An ISP may hand a packet off to a neighbor regardless of what routes that neighbor has advertised. Furthermore, blocks of addresses are summarized together for distant hosts, so a destination may not even be resolvable until packets are forwarded closer.

One might argue that diagnostic tools like ping and traceroute can identify cheaters. Unfortunately, Argyraki, et al., explain that these tools only reveal whether probe packets are echoed, not the fate of past packets [3]. Thus, for example, they are ineffective in detecting low-frequency packet drops. Even more fundamentally, a sophisticated cheater can always spot diagnostic packets and give them special treatment. As a further complication, a cheater may assume different aliases for diagnostic packets arriving over different routes. As we will see below, this gives the cheater a significant advantage in escaping punishment for bad behavior, even if the data path is otherwise observable.

3.2 Modeling Low-Accountability

As the above evidence suggests, the current industry allows for very little insight into the behavior of the network. In this section, we attempt to capture this lack of accountability in our model. We begin by defining a monitor, our model of the way that players receive external information about network behavior:

A monitor is any distributed algorithmic mechanism that runs on the network graph, and outputs, to specific nodes, informational statements about current or past network behavior.

We assume that all external information about network behavior is mediated in this way. The accountability properties of the Internet can be represented by the following monitors:

E2E (End to End): A monitor that informs S/D about what the total path quality is at any time (this is the quality they experience).

ROP (Rest of Path): A monitor that informs each node along the data path what the quality is for the rest of the path to the destination.

PRc (Packets Received): A monitor that tells nodes how much data they accept from each other, so that they can charge by volume. It is important to note, however, that this information is aggregated over many source-destination pairs. Hence, for the sake of realism, it cannot be used to monitor what the data path is.
Players cannot measure the qualities of other, single nodes, just the rest of the path. Nodes cannot see the path past the next hop. This last assumption is stricter than needed for our results. The critical ingredient is that nodes cannot verify that the path avoids a specific hop. This holds, for example, if the path is generally visible, except that nodes can use different aliases for different parents. Similar results also hold if alternate paths always converge after some integer number, m, of hops.

It is important to stress that E2E and ROP are not the contractible monitors we described in the introduction - they do not generate proofs. Thus, even though a player observes certain information, she generally cannot credibly share it with another player. For example, if a node after the first hop starts cheating, the first hop will detect the sudden drop in quality for the rest of the path, but the first hop cannot make the source believe this observation - the source will suspect that the first hop was the cheater, and fabricated the claim against the rest of the path.

Typically, E2E and ROP are envisioned as algorithms that run on a single node, and listen for packet echoes. This is not the only way that they could be implemented, however; an alternate strategy is to aggregate quality measurements from multiple points in the network. These measurements can originate in other monitors, located at various ISPs. The monitor then includes the component monitors as well as whatever mechanisms are in place to motivate nodes to share information honestly as needed. For example, if the source has monitors that reveal the qualities of individual nodes, they could be combined with path information to create an ROP monitor.

Since we know that contracts only accept proofs as input, we can infer that payments in this environment can only depend on the number of packets exchanged between players. In other words, contracts are best-effort. For the remainder of this section, we will assume that contracts are also linear - there is a constant payment flow so long as a node accepts data, and all conditions of the contract are met. Other, more complicated tariffs are also possible, and are typically used to generate lock-in. We believe that our parameter t_0 is sufficient to describe lock-in effects, and we believe that the insights in this section apply equally to any tariffs that are bounded so that the routing game remains continuous at infinity. Restricting attention to linear contracts allows us to represent some node i's contract by its price, p_i.

Because we further know that nodes cannot observe the path after the next hop, we can infer that contracts exist only between neighboring nodes on the graph. We will call this arrangement of contracts bilateral. When a competition game exclusively uses bilateral contracts, we will call it a bilateral contract competition game.

We first focus on the routing game and ask whether a high-quality route can be maintained, even when a low-quality route is cheaper. Recall that this is a requirement in order for nodes to have any incentive to innovate. If nodes tend to route to low-price next hops, regardless of quality, we say that the network is commoditized.
To measure this tendency, we define an index of commoditization as follows: For a node on the data path, i, define its quality premium, d_i = p_j - p_min, where p_j is the flow payment to the next hop in equilibrium, and p_min is the price of the lowest-cost next hop.

Definition: The index of commoditization, I_C, is the average, over each node on the data path, i, of i's flow profit as a fraction of i's quality premium, (p_i - c_i - p_j) / d_i.

I_C ranges from 0, when each node spends all of its potential profit on its quality premium, to infinity, when a node absorbs positive profit but uses the lowest-price next hop. A high value for I_C implies that nodes are spending little of their money inflow on purchasing high quality for the rest of the path. As the next claim shows, this is exactly what happens as the path grows long:

Claim 1. If the only monitors are E2E, ROP, and PRc, then I_C → ∞ as n → ∞, where n is the number of nodes on the data path.

To show that this is true, we first need the following lemma, which will establish the difficulty of punishing nodes in the network. First, a bit of notation: Recall that a cheater can benefit from its actions for t_0 > 0 before other players can react. When a node cheats, it can expect a higher profit flow, at least until it is caught and other players react, perhaps by diverting traffic. Let node i's normal profit flow be π_i, and her profit flow during cheating be some greater value, y_i. We will call the ratio, y_i / π_i, the temptation to cheat.

Lemma 1. If the only monitors are E2E, ROP, and PRc, the discounted time, \int_0^{t_n} e^{-rt}\,dt, needed to punish a cheater increases at least as fast as the product of the temptations to cheat along the data path,

\int_0^{t_n} e^{-rt}\,dt \;\ge\; \prod_{i \text{ on data path}} \frac{y_i}{\pi_i} \int_0^{t_0} e^{-rt}\,dt \qquad (1)

Corollary. If nodes share a minimum temptation to cheat, y/π, the discounted time needed to punish cheating increases at least exponentially in the length of the data path, n,

\int_0^{t_n} e^{-rt}\,dt \;\ge\; \left(\frac{y}{\pi}\right)^{n} \int_0^{t_0} e^{-rt}\,dt \qquad (2)

Since it is the discounted time that increases exponentially, the actual time increases faster than exponentially. If n is so large that t_n is undefined, the given path cannot be maintained in equilibrium.

Proof. The proof proceeds by induction on the number of nodes on the equilibrium data path, n. For n = 1, there is a single node, say i. By cheating, the node earns extra profit (y_i - π_i) \int_0^{t_0} e^{-rt}\,dt. If node i is then punished until time t_1, the extra profit must be cancelled out by the lost profit between time t_0 and t_1, π_i \int_{t_0}^{t_1} e^{-rt}\,dt. A little manipulation gives \int_0^{t_1} e^{-rt}\,dt = (y_i/π_i) \int_0^{t_0} e^{-rt}\,dt, as required.

For n > 1, assume for induction that the claim holds for n - 1. The source does not know whether the cheater is the first hop, or after the first hop. Because the source does not know the data path after the first hop, it is unable to punish nodes beyond it. If it chooses a new first hop, it might not affect the rest of the data path. Because of this, the source must rely on the first hop to punish cheating nodes farther along the path. The first hop needs discounted time, \prod_{i \text{ after first hop}} (y_i/π_i) \int_0^{t_0} e^{-rt}\,dt, to accomplish this by assumption. So the source must give the first hop this much discounted time in order to punish defectors further down the line (and the source will expect poor quality during this period). Next, the source must be protected against a first hop that cheats, and pretends that the problem is later in the path. The first hop can do this for the full discounted time, \prod_{i \text{ after first hop}} (y_i/π_i) \int_0^{t_0} e^{-rt}\,dt, so the source must punish the first hop long enough to remove the extra profit it can make. Following the same argument as for n = 1, we can show that the full discounted time is \prod_{i \text{ on data path}} (y_i/π_i) \int_0^{t_0} e^{-rt}\,dt, which completes the proof.
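A short numerical illustration of the corollary: solving (2) for t_n under a shared temptation shows the required punishment time growing faster than exponentially and then ceasing to exist. The parameter values below are arbitrary.

import math

def punishment_time(temptation, n, r, t0):
    """Smallest t_n satisfying (2): int_0^{t_n} e^(-r*s) ds >=
    temptation^n * int_0^{t0} e^(-r*s) ds.
    Returns None when no finite punishment time suffices."""
    required = temptation**n * (1.0 - math.exp(-r * t0)) / r
    if required >= 1.0 / r:   # exceeds the total discounted time 1/r
        return None
    return -math.log(1.0 - r * required) / r

r, t0, temptation = 0.1, 1.0, 1.5
for n in (1, 2, 4, 6, 8):
    print(n, punishment_time(temptation, n, r, t0))
# Roughly: 1 -> 1.54, 2 -> 2.41, 4 -> 6.58, 6 -> None, 8 -> None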
The above lemma and its corollary show that punishing cheaters becomes more and more difficult as the data path grows long, until doing so is impossible. To capture some intuition behind this result, imagine that you are an end-user, and you notice a sudden drop in service quality. If your data only travels through your access provider, you know it is that provider's fault. You can therefore take your business elsewhere, at least for some time. This threat should motivate your provider to maintain high quality.

Suppose, on the other hand, that your data traverses two providers. When you complain to your ISP, he responds, "yes, we know your quality went down, but it's not our fault, it's the next ISP. Give us some time to punish them and then normal quality will resume." If your access provider is telling the truth, you will want to listen, since switching access providers may not even route around the actual offender. Thus, you will have to accept lower-quality service for some longer time. On the other hand, you may want to punish your access provider as well, in case he is lying. This means you have to wait longer to resume normal service. As more ISPs are added to the path, the time increases in a recursive fashion.

With this lemma in hand, we can return to prove Claim 1.

Proof of Claim 1. Fix an equilibrium data path of length n. Label the path nodes 1, 2, ..., n. For each node i, let i's quality premium be d_i = p_{i+1} - p'_{i+1}, where p_{i+1} is the payment to i's next hop in equilibrium and p'_{i+1} is the price of i's lowest-price next hop. Then we have

I_C \;=\; \frac{1}{n} \sum_{i=1}^{n} \frac{p_i - c_i - p_{i+1}}{p_{i+1} - p'_{i+1}} \;=\; \frac{1}{n} \sum_{i=1}^{n} \frac{1}{g_i - 1}, \qquad (3)

where g_i = (p_i - c_i - p'_{i+1}) / (p_i - c_i - p_{i+1}) is node i's temptation to cheat by routing to the lowest-price next hop. Lemma 1 tells us that \prod_{i=1}^{n} g_i < T, where T = (1 - e^{-r t_0})^{-1}. It requires a bit of calculus to show that I_C is minimized by setting each g_i equal to T^{1/n}. However, as n → ∞, we have T^{1/n} → 1, which shows that I_C → ∞.

According to the claim, as the data path grows long, it increasingly resembles a lowest-price path. Since lowest-price routing does not support innovation, we may speculate that innovation degrades with the length of the data path.
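The lower bound used in this proof is easy to tabulate. Following the proof's minimization, the sketch below computes 1/(T^{1/n} - 1), the smallest index of commoditization compatible with Lemma 1, for growing path lengths; the discount rate and detection lag are illustrative.

import math

def commoditization_lower_bound(n, r, t0):
    """Minimum of I_C = (1/n) * sum 1/(g_i - 1) subject to
    prod g_i <= T = 1/(1 - e^(-r*t0)), attained at g_i = T^(1/n)."""
    T = 1.0 / (1.0 - math.exp(-r * t0))
    return 1.0 / (T**(1.0 / n) - 1.0)

for n in (1, 2, 5, 10, 20):
    print(n, round(commoditization_lower_bound(n, r=0.1, t0=1.0), 2))
# Roughly: 1 -> 0.11, 2 -> 0.45, 5 -> 1.67, 10 -> 3.77, 20 -> 8.01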
Though we suspect stronger claims are possible, we can demonstrate one such result by including an extra assumption:

Available Bargain Path: A competitive market exists for low-cost transit, such that every node can route to the destination for no more than flow payment, p_l.

Claim 2. Under the available bargain path assumption, if node i, a distance n from S, can invest to alter its quality, and the source will spend no more than P_s for a route including node i's new quality, then the payment to node i, p, decreases hyperbolically with n,

p \;\le\; p_l + \left(T^{1/(n-1)} - 1\right) P_s, \qquad (4)

where T = (1 - e^{-r t_0})^{-1} is the bound on the product of temptations from the previous claim. Thus, i will spend no more than (1/r)\left[p_l + \left(T^{1/(n-1)} - 1\right) P_s\right] on this quality improvement, which approaches the bargain path's payment, p_l / r, as n → ∞.

The proof is given in the appendix. As a node gets farther from the source, its maximum payment approaches the bargain price, p_l. Hence, the reward for innovation is bounded by the same amount. Large innovations, meaning those substantially more expensive than p_l / r, will not be pursued deep into the network.

Claim 2 can alternately be viewed as a lower bound on how much it costs to elicit innovation in a network. If the source S wants node i to innovate, it needs to get a motivating payment, p, to i during the routing stage. However, it must also pay the nodes on the way to i a premium in order to motivate them to route properly. The claim shows that this premium increases with the distance to i, until it dwarfs the original payment, p.

Our claims stand in sharp contrast to our null hypothesis from the introduction. Comparing the intuitive argument that supported our hypothesis with these claims, we can see that we implicitly used an oversimplified model of market pressure (as either present or not). As is now clear, market pressure relies on the decisions of customers, but these are limited by the lack of information. Hence, competitive forces degrade as the network deepens.
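The bound (4) is easy to evaluate numerically. The sketch below tabulates the maximum flow payment that can reach an innovator n hops from the source; the parameter values are again illustrative.

import math

def max_payment(n, p_l, P_s, r, t0):
    """Claim 2's upper bound on the flow payment reaching a node
    n hops from the source: p <= p_l + (T^(1/(n-1)) - 1) * P_s."""
    T = 1.0 / (1.0 - math.exp(-r * t0))
    return p_l + (T**(1.0 / (n - 1)) - 1.0) * P_s

# The motivating payment decays toward the bargain price p_l as the
# innovator sits deeper in the network:
for n in (2, 3, 5, 10, 30):
    print(n, round(max_payment(n, p_l=1.0, P_s=100.0, r=0.1, t0=1.0), 1))
# Roughly: 2 -> 951.8, 3 -> 225.2, 5 -> 81.0, 10 -> 30.9, 30 -> 9.4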
4. VERIFIABLE MONITORS

In this section, we begin to introduce more accountability into the network. Recall that in the previous section, we assumed that players couldn't convince each other of their private information. What would happen if they could? If a monitor's informational signal can be credibly conveyed to others, we will call it a verifiable monitor. The monitor's output in this case can be thought of as a statement accompanied by a proof, a string that can be processed by any player to determine that the statement is true.

A verifiable monitor is a distributed algorithmic mechanism that runs on the network graph, and outputs, to specific nodes, proofs about current or past network behavior.

Along these lines, we can imagine verifiable counterparts to E2E and ROP. We will label these E2Ev and ROPv. With these monitors, each node observes the quality of the rest of the path and can also convince other players of these observations by giving them a proof.

By adding verifiability to our monitors, identifying a single cheater is straightforward. The cheater is the node that cannot produce proof that the rest-of-path quality decreased. This means that the negative results of the previous section no longer hold. For example, the following lemma stands in contrast to Lemma 1.

Lemma 2. With monitors E2Ev, ROPv, and PRc, and provided that the node before each potential cheater has an alternate next hop that isn't more expensive, it is possible to enforce any data path in SPE so long as the maximum temptation is less than what can be deterred in finite time,

\left(\frac{y}{\pi}\right)_{\max} \;\le\; \frac{1}{r \int_0^{t_0} e^{-rt}\,dt} \qquad (5)

Proof. This lemma follows because nodes can share proofs to identify who the cheater is. Only that node must be punished in equilibrium, and the preceding node does not lose any payoff in administering the punishment.

With this lemma in mind, it is easy to construct counterexamples to Claim 1 and Claim 2 in this new environment.

Unfortunately, there are at least four reasons not to be satisfied with this improved monitoring system. The first, and weakest, reason is that the maximum temptation remains finite, causing some distortion in routes or payments. Each node along a route must extract some positive profit unless the next hop is also the cheapest. Of course, if t_0 is small, this effect is minimal.

The second, and more serious, reason is that we have always given our source the ability to commit to any punishment. Real-world users are less likely to act collectively, and may simply search for the best service currently offered. Since punishment phases are generally characterized by a drop in quality, real-world end-users may take this opportunity to shop for a new access provider. This will make nodes less motivated to administer punishments.

The third reason is that Lemma 2 does not apply to cheating by coalitions. A coalition node may pretend to punish its successor, but instead enjoy a secret payment from the cheating node. Alternately, a node may bribe its successor to cheat, if the punishment phase is profitable, and so forth. The required discounted time for punishment may increase exponentially in the number of coalition members, just as in the previous section!

The final reason not to accept this monitoring system is that when a cheater is punished, the path will often be routed around not just the offender, but around other nodes as well. Effectively, innocent nodes will be punished along with the guilty. In our abstract model, this doesn't cause trouble since the punishment falls off the equilibrium path. The effects are not so benign in the real world. When ISPs lie in sequence along a data path, they contribute complementary services, and their relationship is vertical. From the perspective of other source-destination pairs, however, these same firms are likely to be horizontal competitors. Because of this, a node might deliberately cheat, in order to trigger punishment for itself and its neighbors. By cheating, the node will save money to some extent, so the cheater is likely to emerge from the punishment phase better off than the innocent nodes. This may give the cheater a strategic advantage against its competitors. In the extreme, the cheater may use such a strategy to drive neighbors out of business, and thereby gain a monopoly on some routes.

5. CONTRACTIBLE MONITORS

At the end of the last section, we identified several drawbacks that persist in an environment with E2Ev, ROPv, and PRc. In this section, we will show how all of these drawbacks can be overcome. To do this, we will require our third and final category of monitor: A contractible monitor is simply a verifiable monitor that generates proofs that can serve as input to a contract. Thus, contractible is jointly a property of the monitor and the institutions that must verify its proofs. Contractibility requires that a court,

1. Can verify the monitor's proofs.
2. Can understand what the proofs and contracts represent to the extent required to police illegal activity.
3. Can enforce payments among contracting parties.
Understanding the agreements between companies has traditionally been a matter of reading contracts on paper. This may prove to be a harder task in a future network setting. Contracts may plausibly be negotiated by machine, be numerous, even per-flow, and be further complicated by the many dimensions of quality.

When a monitor (together with institutional infrastructure) meets these criteria, we will label it with a subscript c, for contractible. The reader may recall that this is how we labeled the packets received monitor, PRc, which allows ISPs to form contracts with per-packet payments. Similarly, E2Ec and ROPc are contractible versions of the monitors we are now familiar with.

At the end of the previous section, we argued for some desirable properties that we'd like our solution to have. Briefly, we would like to enforce optimal data paths with an equilibrium concept that doesn't rely on re-routing for punishment, is coalition-proof, and doesn't punish innocent nodes when a coalition cheats. We will call such an equilibrium a fixed-route coalition-proof protect-the-innocent equilibrium. As the next claim shows, ROPc allows us to create a system of linear (price, quality) contracts under just such an equilibrium.

Claim 3. With ROPc, for any feasible and consistent assignment of rest-of-path qualities to nodes, and any corresponding payment schedule that yields non-negative payoffs, these qualities can be maintained with bilateral contracts in a fixed-route coalition-proof protect-the-innocent equilibrium.

Proof: Fix any data path consistent with the given rest-of-path qualities. Select some monetary punishment, P, large enough to prevent any cheating for time t_0 (the discounted total payment from the source will work). Let each node on the path enter into a contract with its parent, which fixes an arbitrary payment schedule so long as the rest-of-path quality is as prescribed. When the parent node, which has ROPc, submits a proof that the rest-of-path quality is less than expected, the contract awards her an instantaneous transfer, P, from the downstream node. Such proofs can be submitted every t_0 for the previous interval.

Suppose now that a coalition, C, decides to cheat. The source measures a decrease in quality, and according to her contract, is awarded P from the first hop. This means that there is a net outflow of P from the ISPs as a whole. Suppose that node i is not in C. In order for the parent node to claim P from i, it must submit proof that the quality of the path starting at i is not as prescribed. This means that there is a cheater after i. Hence, i would also have detected a change in quality, so i can claim P from the next node on the path. Thus, innocent nodes are not punished. The sequence of payments must end by the destination, so the net outflow of P must come from the nodes in C. This establishes all necessary conditions of the equilibrium.

Essentially, ROPc allows for an implementation of (price, quality) contracts. Building upon this result, we can construct competition games in which nodes offer various qualities to each other at specified prices, and can credibly commit to meet these performance targets, even allowing for coalitions and a desire to damage other ISPs.
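The payment cascade in this proof can be expressed compactly. The sketch below is our own simplification, assuming a linear path and a single cheater rather than a general coalition: each node that holds an ROPc proof recovers P from its successor, so the net outflow lands on the cheater alone.

def settle(path, cheater_index, P):
    """Propagate the contractual penalty P down a data path (the source
    is excluded) after a quality drop. Node i can claim P from its
    successor iff it holds an ROPc proof that the rest of path starting
    after i degraded, i.e. iff the cheater lies after i.
    Returns each node's net transfer (positive = gain)."""
    net = {node: 0.0 for node in path}
    net[path[0]] -= P                 # the first hop pays the source
    for i in range(len(path) - 1):
        if cheater_index > i:         # proof exists: pass P downstream
            net[path[i]] += P
            net[path[i + 1]] -= P
    return net

# Three-hop path; the middle node cheats. Innocent nodes break even,
# and the cheater alone bears the outflow P:
print(settle(["A", "B", "C"], cheater_index=1, P=50.0))
# {'A': 0.0, 'B': -50.0, 'C': 0.0}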
Example 1. Define a Stackelberg price-quality competition game as follows: Extend the partial order of nodes induced by the graph to any complete ordering, such that downstream nodes appear before their parents. In this order, each node selects a contract to offer to its parents, consisting of a rest-of-path quality, and a linear price. In the routing game, each node selects a next hop at every time, consistent with its advertised rest-of-path quality. The Stackelberg price-quality competition game can be implemented in our model with ROPc monitors, by using the strategy in the proof above. It has the following useful property:

Claim 4. The Stackelberg price-quality competition game yields optimal routes in SPE.

The proof is given in the appendix. This property is favorable from an innovation perspective, since firms that invest in high quality will tend to fall on the optimal path, gaining positive payoff. In general, however, investments may be over- or under-rewarded. Extra conditions may be given under which innovation decisions approach perfect efficiency for large innovations. We omit the full analysis here.

Example 2. Alternately, we can imagine that players report their private information to a central authority, which then assigns all contracts. For example, contracts could be computed to implement the cost-minimizing VCG mechanism proposed by Feigenbaum, et al., in [7]. With ROPc monitors, we can adapt this mechanism to maximize welfare. For node, i, on the optimal path, L, the net payment must equal, essentially, its contribution to the welfare of S, D, and the other nodes. If L' is an optimal path in the graph with i removed, the profit flow to i is

u(q_L) - u(q_{L'}) - \sum_{j \in L,\, j \neq i} c_j + \sum_{j \in L'} c_j, \qquad (6)

where q_L and q_{L'} are the qualities of the two paths. Here, (price, quality) contracts ensure that nodes report their qualities honestly. The incentive structure of the VCG mechanism is what motivates nodes to report their costs accurately.

A nice feature of this game is that individual innovation decisions are efficient, meaning that a node will invest in an innovation whenever the investment cost is less than the increased welfare of the optimal data path. Unfortunately, the source may end up paying more than the utility of the path.
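The following sketch evaluates the payment rule (6) on a toy instance. The scalar quality, the linear utility and the cost figures are all illustrative assumptions; the point is only that node i nets the welfare difference its presence creates.

def vcg_profit(u, q_L, q_Lprime, costs_L, costs_Lprime, i):
    """Equation (6): node i's flow profit on the optimal path L, where
    L' is the optimal path with i removed. costs_* map node -> cost."""
    return (u(q_L) - u(q_Lprime)
            - sum(c for j, c in costs_L.items() if j != i)
            + sum(costs_Lprime.values()))

# Illustrative scalar quality (say, latency) with a linear utility:
u = lambda q: 10.0 - q                 # utility falls with latency
q_L, q_Lprime = 3.0, 5.0               # the path through i is 2 ms faster
costs_L = {"i": 0.5, "j": 1.0}
costs_Lprime = {"k": 1.2}
print(vcg_profit(u, q_L, q_Lprime, costs_L, costs_Lprime, "i"))
# (7 - 5) - 1.0 + 1.2 = 2.2 : i captures its marginal contribution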
Notice that with just E2Ec, a weaker version of Claim 3 holds. Bilateral (price, quality) contracts can be maintained in an equilibrium that is fixed-route and coalition-proof, but not protect-the-innocent. This is done by writing contracts to punish everyone on the path when the end-to-end quality drops. If the path length is n, the first hop pays nP to the source, the second hop pays (n - 1)P to the first, and so forth. This ensures that every node is punished sufficiently to make cheating unprofitable. For the reasons we gave previously, we believe that this solution concept is less than ideal, since it allows for malicious nodes to deliberately trigger punishments for potential competitors.

Up to this point, we have adopted fixed-route coalition-proof protect-the-innocent equilibrium as our desired solution concept, and shown that ROPc monitors are sufficient to create some competition games that are desirable in terms of service diversity and innovation. As the next claim will show, rest-of-path monitoring is also necessary to construct such games under our solution concept.

Before we proceed, what does it mean for a game to be desirable from the perspective of service diversity and innovation? We will use a very weak assumption, essentially, that the game is not fully commoditized for any node. The claim will hold for this entire class of games.

Definition: A competition game is nowhere-commoditized if for each node, i, not adjacent to D, there is some assignment of qualities and marginal costs to nodes, such that the optimal data path includes i, and i has a positive temptation to cheat.

APPENDIX

Proof of Claim 2. In the case of linear contracts, it is sufficient to require that

T \;\ge\; \left(\frac{P_s + p - p_l}{P_s}\right)^{n-1}. \qquad (7)

This can be rearranged to give p \le p_l + (T^{1/(n-1)} - 1) P_s, as required. The rest of the claim simply recognizes that p/r is the greatest reward node i can receive for its investment, so it will not invest sums greater than this.

Proof of Claim 4. Label the nodes 1, 2, ..., N in the order in which they select contracts. Let subgame n be the game that begins with n choosing its contract. Let L_n be the set of possible paths restricted to nodes n, ..., N. That is, L_n is the set of possible routes from S to reach some node that has already moved.

For subgame n, define the local welfare over paths l ∈ L_n, and their possible next hops, j < n, as follows:

V(l, j) \;=\; u(q_l * q_j^{path}) - \sum_{i \in l} c_i - p_j, \qquad (8)

where q_l is the quality of path l in the set {n, ..., N}, and q_j^{path} and p_j are the quality and price of the contract j has offered.

For induction, assume that subgame n + 1 maximizes local welfare. We show that subgame n does as well. If node n selects next hop k, we can write the following relation,

V(l, n) \;=\; V((l, n), k) - (p_n - c_n - p_k) \;=\; V((l, n), k) - \pi_n, \qquad (9)

where π_n is node n's profit if the path to n is chosen. This path is chosen whenever V(l, n) is maximal over L_{n+1} and possible next hops. If V((l, n), k) is maximal over L_n, it is also maximal over the paths in L_{n+1} that don't lead to n. This means that node n can choose some π_n small enough so that V(l, n) is maximal over L_{n+1}, so the route will lead to k.

Conversely, if V((l, n), k) is not maximal over L_n, either V is greater for another of n's next hops, in which case n will select that one in order to increase π_n, or V is greater for some path in L_{n+1} that doesn't lead to n, in which case V(l, n) cannot be maximal for any non-negative π_n.

Thus, we conclude that subgame n maximizes local welfare. For the initial case, observe that this assumption holds for the source. Finally, we deduce that subgame 1, which is the entire game, maximizes local welfare, which is equivalent to actual welfare. Hence, the Stackelberg price-quality game yields an optimal route.", "keywords": "monitor;clean-slate architectural design;contracting system;verifiable monitor;innovation;contract;routing policy;network monitor;routing stage;quality;contractible monitor;commoditization;smart market;incentive"} {"name": "train_C-65", "title": "Shooter Localization and Weapon Classification with Soldier-Wearable Networked Sensors", "abstract": "The paper presents a wireless sensor network-based mobile countersniper system.
A sensor node consists of a helmet-mounted microphone array, a COTS MICAz mote for internode communication and a custom sensorboard that implements the acoustic detection and Time of Arrival (ToA) estimation algorithms on an FPGA. A 3-axis compass provides self orientation and Bluetooth is used for communication with the soldier's PDA running the data fusion and the user interface. The heterogeneous sensor fusion algorithm can work with data from a single sensor or it can fuse ToA or Angle of Arrival (AoA) observations of muzzle blasts and ballistic shockwaves from multiple sensors. The system estimates the trajectory, the range, the caliber and the weapon type. The paper presents the system design and the results from an independent evaluation at the US Army Aberdeen Test Center. The system performance is characterized by 1-degree trajectory precision and over 95% caliber estimation accuracy for all shots, and close to 100% weapon estimation accuracy for 4 out of 6 guns tested.", "fulltext": "1. INTRODUCTION

The importance of countersniper systems is underscored by the constant stream of news reports coming from the Middle East. In October 2006 CNN reported on a new tactic employed by insurgents. A mobile sniper team moves around busy city streets in a car, positions itself at a good standoff distance from dismounted US military personnel, takes a single well-aimed shot and immediately melts into the city traffic. By the time the soldiers can react, they are gone. A countersniper system that provides almost immediate shooter location to every soldier in the vicinity would provide clear benefits to the warfighters.

Our team introduced PinPtr, the first sensor network-based countersniper system [17, 8] in 2003. The system is based on potentially hundreds of inexpensive sensor nodes deployed in the area of interest, forming an ad-hoc multihop network. The acoustic sensors measure the Time of Arrival (ToA) of muzzle blasts and ballistic shockwaves, pressure waves induced by the supersonic projectile, and send the data to a base station, where a sensor fusion algorithm determines the origin of the shot. PinPtr is characterized by high precision: 1 m average 3D accuracy for shots originating within or near the sensor network, 1-degree bearing precision for both azimuth and elevation, and 10% accuracy in range estimation for longer-range shots. The truly unique characteristic of the system is that it works in such reverberant environments as cluttered urban terrain and that it can resolve multiple simultaneous shots at the same time. This capability is due to the widely distributed sensing and the unique sensor fusion approach [8]. The system has been tested several times in US Army MOUT (Military Operations in Urban Terrain) facilities.

The obvious disadvantage of such a system is its static nature. Once the sensors are distributed, they cover a certain area. Depending on the operation, the deployment may be needed for an hour or a month, but eventually the area loses its importance. It is not practical to gather and reuse the sensors, especially under combat conditions. Even if the sensors are cheap, it is still a waste and a logistical problem to provide a continuous stream of sensors as the operations move from place to place. As it is primarily the soldiers that the system protects, a natural extension is to mount the sensors on the soldiers themselves.
While there are vehicle-mounted countersniper systems [1] available commercially, we are not aware of a deployed system that protects dismounted soldiers. A helmet-mounted system was developed in the mid 90s by BBN [3], but it was not continued beyond the DARPA program that funded it.

To move from a static sensor network-based solution to a highly mobile one presents significant challenges. The sensor positions and orientation need to be constantly monitored. As soldiers may work in groups of as little as four people, the number of sensors measuring the acoustic phenomena may be an order of magnitude smaller than before. Moreover, the system should be useful to even a single soldier. Finally, additional requirements called for caliber estimation and weapon classification in addition to source localization.

The paper presents the design and evaluation of our soldier-wearable mobile countersniper system. It describes the hardware and software architecture including the custom sensor board equipped with a small microphone array and connected to a COTS MICAz mote [12]. Special emphasis is paid to the sensor fusion technique that estimates the trajectory, range, caliber and weapon type simultaneously. The results and analysis of an independent evaluation of the system at the US Army Aberdeen Test Center are also presented.

2. APPROACH

The firing of a typical military rifle, such as the AK47 or M16, produces two distinct acoustic phenomena. The muzzle blast is generated at the muzzle of the gun and travels at the speed of sound. The supersonic projectile generates an acoustic shockwave, a kind of sonic boom. The wavefront has a conical shape, the angle of which depends on the Mach number, the speed of the bullet relative to the speed of sound.

The shockwave has a characteristic shape resembling a capital N. The rise time at both the start and end of the signal is very fast, under 1 μsec. The length is determined by the caliber and the miss distance, the distance between the trajectory and the sensor. It is typically a few hundred μsec. Once a trajectory estimate is available, the shockwave length can be used for caliber estimation.
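The dependence of the cone on the Mach number follows the standard ballistic shockwave geometry: the cone half-angle is arcsin(1/M). The sketch below computes it; the speed of sound and the muzzle velocity figure are representative values, and the code is ours rather than part of the system described here.

import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C (assumed)

def mach_cone_half_angle(bullet_speed):
    """Half-angle of the ballistic shockwave cone, asin(1/M),
    where M is the Mach number of the projectile."""
    M = bullet_speed / SPEED_OF_SOUND
    if M <= 1.0:
        raise ValueError("no shockwave: projectile is subsonic")
    return math.degrees(math.asin(1.0 / M))

# A muzzle velocity of roughly 715 m/s (typical of an AK47 round)
# gives a cone half-angle of about 29 degrees; slower bullets
# produce wider cones.
print(round(mach_cone_half_angle(715.0), 1))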
Our system is based on four microphones connected to a sensorboard. The board detects shockwaves and muzzle blasts and measures their ToA. If at least three acoustic channels detect the same event, its AoA is also computed. If both the shockwave and muzzle blast AoA are available, a simple analytical solution gives the shooter location, as shown in Section 6. As the microphones are close to each other, typically 2-4, we cannot expect very high precision. Also, this method does not estimate a trajectory. In fact, an infinite number of trajectory-bullet speed pairs satisfy the observations. However, the sensorboards are also connected to COTS MICAz motes and they share their AoA and ToA measurements, as well as their own location and orientation, with each other using a multihop routing service [9]. A hybrid sensor fusion algorithm then estimates the trajectory, the range, the caliber and the weapon type based on all available observations.

The sensorboard is also Bluetooth capable for communication with the soldier's PDA or laptop computer. A wired USB connection is also available. The sensor fusion algorithm and the user interface get their data through one of these channels.

The orientation of the microphone array at the time of detection is provided by a 3-axis digital compass. Currently the system assumes that the soldier's PDA is GPS-capable and it does not provide a self-localization service itself. However, the accuracy of GPS is a few meters, degrading the overall accuracy of the system. Refer to Section 7 for an analysis. The latest generation sensorboard features a Texas Instruments CC-1000 radio enabling the high-precision radio interferometric self-localization approach we have developed separately [7]. However, we leave the integration of the two technologies for future work.

3. HARDWARE

Since the first static version of our system in 2003, the sensor nodes have been built upon the UC Berkeley/Crossbow MICA product line [11]. Although rudimentary acoustic signal processing can be done on these microcontroller-based boards, they do not provide the required computational performance for shockwave detection and angle of arrival measurements, where multiple signals from different microphones need to be processed in parallel at a high sampling rate. Our 3rd generation sensorboard is designed to be used with MICAz motes - in fact it has almost the same size as the mote itself (see Figure 1).

Figure 1: Acoustic sensorboard/mote assembly

The board utilizes a powerful Xilinx XC3S1000 FPGA chip with various standard peripheral IP cores, multiple soft processor cores and custom logic for the acoustic detectors (Figure 2). The onboard Flash (4 MB) and PSRAM (8 MB) modules allow storing raw samples of several acoustic events, which can be used to build libraries of various acoustic signatures and for refining the detection cores off-line. Also, the external memory blocks can store program code and data used by the soft processor cores on the FPGA.

Figure 2: Block diagram of the sensorboard.

The board supports four independent analog channels sampled at up to 1 MS/s (million samples per second). These channels, featuring an electret microphone (Panasonic WM-64PNT), amplifiers with controllable gain (30-60 dB) and a 12-bit serial ADC (Analog Devices AD7476), reside on separate tiny boards which are connected to the main sensorboard with ribbon cables. This partitioning enables the use of truly different audio channels (e.g.: slower sampling frequency, different gain or dynamic range) and also results in less noisy measurements by avoiding long analog signal paths.

The sensor platform offers a rich set of interfaces and can be integrated with existing systems in diverse ways. An RS232 port and a Bluetooth (BlueGiga WT12) wireless link with virtual UART emulation are directly available on the board and provide simple means to connect the sensor to PCs and PDAs. The mote interface consists of an I2C bus along with an interrupt and GPIO line (the latter one is used for precise time synchronization between the board and the mote). The motes are equipped with IEEE 802.15.4 compliant radio transceivers and support ad-hoc wireless networking among the nodes and to/from the base station. The sensorboard also supports full-speed USB transfers (with custom USB dongles) for uploading recorded audio samples to the PC.
The on-board JTAG chain, directly accessible through a dedicated connector, contains the FPGA part and configuration memory and provides in-system programming and debugging facilities.

The integrated Honeywell HMR3300 digital compass module provides heading, pitch and roll information with 1° accuracy, which is essential for calculating and combining directional estimates of the detected events.

Due to the complex voltage requirements of the FPGA, the power supply circuitry is implemented on the sensorboard and provides power both locally and to the mote. We used a quad pack of rechargeable AA batteries as the power source (although any other configuration is viable that meets the voltage requirements). The FPGA core (1.2 V) and I/O (3.3 V) voltages are generated by a highly efficient buck switching regulator. The FPGA configuration (2.5 V) and a separate 3.3 V power net are fed by low-current LDOs; the latter is used to provide independent power to the mote and to the Bluetooth radio. The regulators, except the last one, can be turned on/off from the mote or through the Bluetooth radio (via GPIO lines) to save power.

The first prototype of our system employed 10 sensor nodes. Some of these nodes were mounted on military kevlar helmets with the microphones directly attached to the surface at about 20 cm separation, as shown in Figure 3(a). The rest of the nodes were mounted in plastic enclosures (Figure 3(b)) with the microphones placed near the corners of the boxes to form approximately 5 cm × 10 cm rectangles.

Figure 3: Sensor prototypes mounted on a kevlar helmet (a) and in a plastic box on a tripod (b).

4. SOFTWARE ARCHITECTURE
The sensor application relies on three subsystems exploiting three different computing paradigms, as shown in Figure 4. Although each of these execution models suits its domain-specific tasks extremely well, this diversity presents a challenge for software development and system integration. The sensor fusion and user interface subsystem runs on PDAs and was implemented in Java. The sensing and signal processing tasks are executed by an FPGA, which also acts as a bridge between various wired and wireless communication channels. The ad-hoc internode communication, time synchronization and data sharing are the responsibilities of a microcontroller-based radio module. Accordingly, the application employs a wide variety of communication protocols such as Bluetooth and IEEE 802.15.4 wireless links, as well as optional UARTs, I2C and/or USB buses.

Figure 4: Software architecture diagram.

The sensor fusion module receives and unpacks raw measurements (time stamps and feature vectors) from the sensorboard through the Bluetooth link. Also, it fine-tunes the execution of the signal processing cores by setting parameters through the same link.
Note that measurements from other nodes, along with their location and orientation information, also arrive from the sensorboard, which acts as a gateway between the PDA and the sensor network. The handheld device obtains its own GPS location data and directly receives orientation information through the sensorboard. The results of the sensor fusion are displayed on the PDA screen with low latency. Since the application is implemented in pure Java, it is portable across different PDA platforms.

The border between software and hardware is considerably blurred on the sensor board. The IP cores, implemented in hardware description languages (HDL) on the reconfigurable FPGA fabric, closely resemble hardware building blocks. However, some of them, most notably the soft processor cores, execute true software programs. The primary tasks of the sensor board software are 1) acquiring data samples from the analog channels, 2) processing acoustic data (detection), and 3) providing access to the results and run-time parameters through different interfaces.

As shown in Figure 4, a centralized virtual register file contains the address decoding logic, the registers for storing parameter values and results, and the point-to-point data buses to and from the peripherals. Thus, it effectively integrates the building blocks within the sensorboard and decouples the various communication interfaces. This architecture enabled us to deploy the same set of sensors in a centralized scenario, where the ad-hoc mote network (using the I2C interface) collected and forwarded the results to a base station, or to build a decentralized system where the local PDAs execute the sensor fusion on the data obtained through the Bluetooth interface (and optionally from other sensors through the mote interface). The same set of registers is also accessible through a UART link with a terminal emulation program. Also, because the low-level interfaces are hidden by the register file, one can easily add or replace them with new ones (e.g., the first generation of motes supported a standard μP interface bus on the sensor connector, which was dropped in later designs).

The most important results are the time stamps of the detected events. These time stamps and all other timing information (parameters, acoustic event features) are based on a 1 MHz clock and an internal timer on the FPGA. The time conversion and synchronization between the sensor network and the board is done by the mote by periodically requesting the capture of the current timer value through a dedicated GPIO line and reading the captured value from the register file through the I2C interface. Based on the current and previous readings and the corresponding mote-local time stamps, the mote can calculate and maintain the scaling factor and offset between the two time domains.
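To illustrate this two-point conversion, the following minimal sketch maintains the scaling factor and offset from consecutive capture pairs. It is illustrative only: the actual mote code is written in nesC, and the class and field names here are ours, not the implementation's.

    /** Minimal sketch of the two-point time conversion described above.
     *  Assumes at least two captures have been recorded. Names are
     *  illustrative; the real mote code is nesC, not Java. */
    public final class TimeBaseConverter {
        private long prevBoardTicks, prevMoteTicks;   // previous capture pair
        private long lastBoardTicks, lastMoteTicks;   // most recent capture pair

        /** Record a new capture: the FPGA timer value latched via the GPIO
         *  line and the mote-local time stamp taken at the same instant. */
        public void addCapture(long boardTicks, long moteTicks) {
            prevBoardTicks = lastBoardTicks;
            prevMoteTicks = lastMoteTicks;
            lastBoardTicks = boardTicks;
            lastMoteTicks = moteTicks;
        }

        /** Convert a sensorboard time stamp (1 MHz timer ticks) to mote time
         *  using the scale and offset implied by the last two captures. */
        public long toMoteTime(long boardTicks) {
            double scale = (double) (lastMoteTicks - prevMoteTicks)
                         / (double) (lastBoardTicks - prevBoardTicks);
            return lastMoteTicks + Math.round(scale * (boardTicks - lastBoardTicks));
        }
    }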
The mote interface is implemented by the I2C slave IP core and a thin adaptation layer which provides a data and address bus abstraction on top of it. The maximum effective bandwidth is 100 Kbps through this interface. The FPGA contains several UART cores as well: for communicating with the on-board Bluetooth module, for controlling the digital compass and for providing a wired RS232 link through a dedicated connector. The control, status and data registers of the UART modules are available through the register file.

The higher level protocols on these lines are implemented by Xilinx PicoBlaze microcontroller cores [13] and corresponding software programs. One of them provides a command line interface for test and debug purposes, while the other is responsible for parsing compass readings. By default, they are connected to the RS232 port and to the on-board digital compass line, respectively; however, they can be rewired to any communication interface by changing the register file base address in the programs (e.g., the command line interface can be provided through the Bluetooth channel).

Two of the external interfaces are not accessible through the register file: a high speed USB link and the SRAM interface are tied to the recorder block. The USB module implements a simple FIFO with parallel data lines connected to an external FT245R USB device controller. The RAM driver implements data read/write cycles with correct timing and is connected to the on-board pseudo SRAM. These interfaces provide 1 MB/s effective bandwidth for downloading recorded audio samples, for example.

The data acquisition and signal processing paths exhibit clear symmetry: the same set of IP cores is instantiated four times (i.e., once per acoustic channel) and runs independently. The signal paths meet only just before the register file. Each of the analog channels is driven by a serial A/D core, which provides a 20 MHz serial clock and shifts in the 8-bit data samples at 1 MS/s, and by a digital potentiometer driver for setting the required gain. Each channel has its own shockwave and muzzle blast detector, which are described in Section 5. The detectors fetch run-time parameter values from the register file and store their results there as well. The coordinator core constantly monitors the detection results and generates a mote interrupt promptly upon full detection or after a reasonable timeout after partial detection.

The recorder component is not used in the final deployment; however, it is essential for development purposes, for refining parameter values for new types of weapons or for other acoustic sources. This component receives the samples from all channels and stores them in circular buffers in the PSRAM device. If the signal amplitude on one of the channels crosses a predefined threshold, the recorder component suspends the sample collection with a predefined delay and dumps the contents of the buffers through the USB link. The length of these buffers and delays, the sampling rate, the threshold level and the set of recorded channels can be (re)configured at run time through the register file. Note that the core operates independently from the other signal processing modules; therefore, it can be used to validate the detection results off-line.

The FPGA cores are implemented in VHDL; the PicoBlaze programs are written in assembly. The complete configuration occupies 40% of the resources (slices) of the FPGA and the maximum clock speed is 30 MHz, which is safely higher than the speed used with the actual device (20 MHz).

The MICAz motes are responsible for distributing measurement data across the network, which drastically improves the localization and classification results at each node. Besides a robust radio (MAC) layer, the motes require two essential middleware services to achieve this goal. The messages need to be propagated in the ad-hoc multihop network using a routing service.
We successfully integrated the Directed Flood-Routing Framework (DFRF) [9] in our application. Apart from automatic message aggregation and efficient buffer management, the most unique feature of DFRF is its plug-in architecture, which accepts custom routing policies. Routing policies are state machines that govern how received messages are stored, resent or discarded. Example policies include spanning tree routing, broadcast, geographic routing, etc. Different policies can be used for different messages concurrently, and the application is able to change the underlying policies at run time (e.g., because of the changing RF environment or power budget). In fact, we switched several times between a simple but lavish broadcast policy and a more efficient gradient routing on the field.

Correlating ToA measurements requires a common time base and precise time synchronization in the sensor network. The Routing Integrated Time Synchronization (RITS) [15] protocol relies on very accurate MAC-layer time-stamping to embed in the message itself the cumulative delay that a data message has accrued since the time of the detection. That is, at every node it measures the time the message spent there and adds this to the number in the time delay slot of the message, right before the message leaves the current node. Every receiving node can subtract the delay from its current time to obtain the detection time in its local time reference. The service provides very accurate time conversion (a few μs of error per hop), which is more than adequate for this application. Note that the motes also need to convert the sensorboard time stamps to mote time, as described earlier.
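The per-hop bookkeeping of RITS can be illustrated with the following minimal sketch. It is a model of the idea only: the real service is a TinyOS/nesC component relying on MAC-layer time-stamping, and the class and method names here are hypothetical.

    /** Illustrative model of RITS-style cumulative delay accounting.
     *  The real RITS is a TinyOS/nesC service; names are hypothetical. */
    public final class RitsMessage {
        long accumulatedDelay;  // total time (us) the message has spent in the network
        long arrivalTime;       // local time (us) when this node received the message

        /** On reception, remember the local arrival time (taken from
         *  MAC-layer time-stamping in the real implementation). */
        void onReceive(long localNow) {
            arrivalTime = localNow;
        }

        /** The detection time expressed in this node's local clock:
         *  arrival time minus everything the message accrued before arriving. */
        long detectionTimeLocal() {
            return arrivalTime - accumulatedDelay;
        }

        /** Just before forwarding, add the time spent at this node to the
         *  delay slot, so downstream nodes can do the same subtraction. */
        void onForward(long localSendTime) {
            accumulatedDelay += localSendTime - arrivalTime;
        }
    }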
The mote application is implemented in nesC [5] and runs on top of TinyOS [6]. With its 3 KB RAM and 28 KB program space (ROM) requirement, it easily fits on the MICAz motes.

5. DETECTION ALGORITHM
There are several characteristics of acoustic shockwaves and muzzle blasts which distinguish their detection and signal processing algorithms from regular audio applications. Both events are transient by their nature and present very intense stimuli to the microphones. This is increasingly problematic with low-cost electret microphones designed for picking up regular speech or music. Although mechanical damping of the microphone membranes can mitigate the problem, this approach is not without side effects. The detection algorithms have to be robust enough to handle severe nonlinear distortion and transitory oscillations. Since the muzzle blast signature closely follows the shockwave signal and because of potential automatic weapon bursts, it is extremely important to settle the audio channels and the detection logic as soon as possible after an event. Also, precise angle of arrival estimation necessitates a high sampling frequency (in the MHz range) and accurate event detection. Moreover, the detection logic needs to process multiple channels in parallel (4 channels on our existing hardware).

These requirements dictated simple and robust algorithms both for muzzle blast and shockwave detection. Instead of using mundane energy detectors, which might not be able to distinguish the two different events, the applied detectors strive to find the most important characteristics of the two signals in the time domain using simple state machine logic. The detectors are implemented as independent IP cores within the FPGA, one pair for each channel. The cores are run-time configurable and provide detection event signals with high-precision time stamps and event-specific feature vectors. Although the cores are running independently and in parallel, a crude local fusion module integrates them by shutting down those cores which missed their events after a reasonable timeout and by generating a single detection message towards the mote. At this point, the mote can read and forward the detection times and features and is responsible for restarting the cores afterwards.

The most conspicuous characteristics of an acoustic shockwave (see Figure 5(a)) are the steep rising edges at the beginning and end of the signal. Also, the length of the N-wave is fairly predictable, as described in Section 6.5, and is relatively short (200-300 μs). The shockwave detection core is continuously looking for two rising edges within a given interval. The state machine of the algorithm is shown in Figure 5(b). The input parameters are the minimum steepness of the edges (D, E) and the bounds on the length of the wave (Lmin, Lmax). The only feature calculated by the core is the length of the observed shockwave signal.

Figure 5: Shockwave signal generated by a 5.56 × 45 mm NATO projectile (a) and the state machine of the detection algorithm (b).
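For illustration, the following is a software rendition of the two-rising-edge state machine of Figure 5(b), using the delayed-difference steepness test s[t] − s[t−D] > E. The deployed detector is a VHDL core; this Java sketch mirrors our original Java prototypes in spirit, but the interface and all names are ours.

    /** Software sketch of the shockwave detector state machine (Figure 5(b)).
     *  The real detector is a VHDL core; this rendition is illustrative. */
    public final class ShockwaveDetector {
        enum State { IDLE, FIRST_EDGE, FIRST_EDGE_DONE, SECOND_EDGE, FOUND }

        private final int e;          // minimum rise over the D-sample window
        private final int lMin, lMax; // bounds on the N-wave length (samples)

        private State state = State.IDLE;
        private int tStart;           // sample index of the first rising edge
        private int length;           // feature: observed shockwave length

        public ShockwaveDetector(int e, int lMin, int lMax) {
            this.e = e; this.lMin = lMin; this.lMax = lMax;
        }

        /** Feed one sample; s is the delayed difference s[t] - s[t-D].
         *  Returns true when a complete shockwave has been found. */
        public boolean step(int t, int s) {
            boolean rising = s > e;
            switch (state) {
                case IDLE:
                    if (rising) { tStart = t; state = State.FIRST_EDGE; }
                    break;
                case FIRST_EDGE:
                    if (!rising) state = State.FIRST_EDGE_DONE;
                    break;
                case FIRST_EDGE_DONE:
                    if (t - tStart >= lMax) state = State.IDLE;  // timed out
                    else if (rising && t - tStart > lMin) state = State.SECOND_EDGE;
                    break;
                case SECOND_EDGE:
                    if (t - tStart >= lMax) state = State.IDLE;  // too long
                    else if (!rising) {
                        length = t - tStart;                     // len := t - tstart
                        state = State.FOUND;
                        return true;
                    }
                    break;
                case FOUND:
                    break;  // held until the local fusion module restarts the core
            }
            return false;
        }

        public int shockwaveLength() { return length; }
    }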
In contrast to shockwaves, the muzzle blast signatures are characterized by a long initial period (1-5 ms) where the first half period is significantly shorter than the second half [4]. Due to the physical limitations of the analog circuitry described at the beginning of this section, irregular oscillations and glitches might show up within this longer time window, as can be clearly seen in Figure 6(a). Therefore, the real challenge for the matching detection core is to identify the first and second half periods properly. The state machine (Figure 6(b)) does not work on the raw samples directly but is fed by a zero crossing (ZC) encoder. After the initial triggering, the detector attempts to collect those ZC segments which belong to the first period (positive amplitude) while discarding too short (in our terminology: garbage) segments, effectively implementing a rudimentary low-pass filter in the ZC domain. After it encounters a sufficiently long negative segment, it runs the same collection logic for the second half period. If too much garbage is discarded in the collection phases, the core resets itself to prevent the (false) detection of half periods coming from completely different periods separated by rapid oscillation or noise. Finally, if the constraints on the total length and on the length ratio hold, the core generates a detection event along with the actual length, amplitude and energy of the period, calculated concurrently. The initial triggering mechanism is based on two amplitude thresholds: one static (but configurable) amplitude level and a dynamically computed one. The latter is essential to adapt the sensor to different ambient noise environments and to temporarily suspend the muzzle blast detector after a shockwave event (oscillations in the analog section or reverberations in the sensor enclosure might otherwise trigger false muzzle blast detections). The dynamic noise level is estimated by a single-pole recursive low-pass filter (cutoff at 0.5 kHz) on the FPGA.

Figure 6: Muzzle blast signature (a) produced by an M16 assault rifle and the corresponding detection logic (b).
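The dynamic threshold just described can be sketched as follows. The filter coefficient is an assumption derived from the stated 0.5 kHz cutoff and the 1 MS/s sampling rate, and the margin multiplier is an illustrative detail not taken from the actual FPGA implementation.

    /** Sketch of the dynamic noise-floor estimator: a single-pole recursive
     *  low-pass filter over the rectified signal. ALPHA is an assumed value
     *  approximating a 0.5 kHz cutoff at a 1 MS/s sampling rate. */
    public final class NoiseFloorEstimator {
        private static final double ALPHA = 2 * Math.PI * 500.0 / 1_000_000.0; // ~0.00314
        private double level;        // running noise-floor estimate
        private final double margin; // assumed multiplier turning the floor into a trigger level

        public NoiseFloorEstimator(double margin) { this.margin = margin; }

        /** Update with one sample and return the current dynamic threshold. */
        public double update(double sample) {
            level += ALPHA * (Math.abs(sample) - level); // y += a * (|x| - y)
            return level * margin;
        }

        /** True if the sample exceeds the dynamic trigger level. */
        public boolean triggers(double sample) {
            return Math.abs(sample) > level * margin;
        }
    }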
The detection cores were originally implemented in Java and evaluated on pre-recorded signals because of much faster test runs and more convenient debugging facilities. Later on, they were ported to VHDL and synthesized using the Xilinx ISE tool suite. The functional equivalence between the two implementations was tested by VHDL test benches and Python scripts, which provided an automated way to exercise the detection cores on the same set of pre-recorded signals and to compare the results.

6. SENSOR FUSION
The sensor fusion algorithm receives detection messages from the sensor network and estimates the bullet trajectory, the shooter position, the caliber of the projectile and the type of the weapon. The algorithm consists of well-separated computational tasks outlined below:

1. Compute muzzle blast and shockwave directions of arrival for each individual sensor (see 6.1).
2. Compute range estimates. This algorithm can analytically fuse a pair of shockwave and muzzle blast AoA estimates (see 6.2).
3. Compute a single trajectory from all shockwave measurements (see 6.3).
4. If a trajectory is available, compute the range (see 6.4); otherwise compute the shooter position first and then the trajectory based on it (see 6.4).
5. If a trajectory is available, compute the caliber (see 6.5).
6. If the caliber is available, compute the weapon type (see 6.6).

We describe each step in detail in the following sections.

6.1 Direction of arrival
The first step of the sensor fusion is to calculate the muzzle blast and shockwave AoA-s for each sensorboard. Each sensorboard has four microphones that measure the ToA-s. Since the microphone spacing is orders of magnitude smaller than the distance to the sound source, we can approximate the approaching sound wave front with a plane (far field assumption).

Let us formalize the problem for 3 microphones first. Let P1, P2 and P3 be the positions of the microphones, ordered by time of arrival t1 < t2 < t3. First we apply a simple geometry validation step. The measured time difference between two microphones cannot be larger than the sound propagation time between them:

|ti − tj| ≤ |Pi − Pj|/c + ε

where c is the speed of sound and ε is the maximum measurement error. If this condition does not hold, the corresponding detections are discarded. Let v(x, y, z) be the normal vector of the unknown direction of arrival. We also use r1(x1, y1, z1), the vector from P1 to P2, and r2(x2, y2, z2), the vector from P1 to P3.

Consider the projection of the direction of motion of the wave front (v) onto r1, divided by the speed of sound (c). This gives how long it takes the wave front to propagate from P1 to P2:

v · r1 = c(t2 − t1)

The same relationship holds for r2 and v:

v · r2 = c(t3 − t1)

We also know that v is a normal vector:

v · v = 1

Moving from vectors to coordinates using the dot product definition leads to a quadratic system:

x x1 + y y1 + z z1 = c(t2 − t1)
x x2 + y y2 + z z2 = c(t3 − t1)
x² + y² + z² = 1

We omit the solution steps here, as they are straightforward but long. There are two solutions (if the source is on the P1P2P3 plane, the two solutions coincide). We use the fourth microphone's measurement, if there is one, to eliminate one of them. Otherwise, both solutions are considered for further processing.

6.2 Muzzle-shock fusion

Figure 7: Section plane of a shot (at P) and two sensors (at P1 and at P2). One sensor detects the muzzle blast's, the other the shockwave's time and direction of arrival.

Consider the situation in Figure 7. A shot was fired from P at time t. Both P and t are unknown. We have one muzzle blast and one shockwave detection by two different sensors with AoA and, hence, ToA information available. The muzzle blast detection is at position P1 with time t1 and AoA u. The shockwave detection is at P2 with time t2 and AoA v. u and v are normal vectors. It is shown below that these measurements are sufficient to compute the position of the shooter (P).

Let P2′ be the point on the extended shockwave cone surface where PP2′ is perpendicular to the surface. Note that PP2′ is parallel with v. Since P2′ is on the cone surface which hits P2, a sensor at P2′ would detect the same shockwave time of arrival (t2). The cone surface travels at the speed of sound (c), so we can express P using P2′:

P = P2′ + cv(t2 − t)

P can also be expressed from P1:

P = P1 + cu(t1 − t)

yielding

P1 + cu(t1 − t) = P2′ + cv(t2 − t)

P2P2′ is perpendicular to v:

(P2 − P2′) · v = 0

yielding

(P1 + cu(t1 − t) − cv(t2 − t) − P2) · v = 0

which contains only one unknown, t. One obtains:

t = ((P1 − P2) · v / c + (u · v)t1 − t2) / (u · v − 1)

From here we can calculate the shooter position P.

Consider the special single-sensor case where P1 = P2 (one sensor detects both the shockwave and the muzzle blast AoA). In this case:

t = ((u · v)t1 − t2) / (u · v − 1)

Since u and v are not used separately, only their dot product u · v, the absolute orientation of the sensor can be arbitrary; we still get t, which gives us the range.

Here we assumed that the shockwave is a cone, which is only true for constant projectile speeds. In reality, the angle of the cone slowly grows; the surface resembles one half of an American football. The decelerating bullet results in a smaller time difference between the shockwave and the muzzle blast detections because the shockwave generation slows down with the bullet. A smaller time difference results in a smaller range, so the above formula underestimates the true range. However, it can still be used with a proper deceleration correction function. We leave this for future work.
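The closed-form solution above transcribes directly into code. The following sketch assumes the constant-speed cone model (no deceleration correction) and a fixed speed of sound; the small vector type and all names are illustrative, not taken from our implementation.

    /** Direct transcription of the muzzle-shock fusion formula above
     *  (constant projectile speed assumed). Names are illustrative. */
    public final class MuzzleShockFusion {
        static final double C = 343.0; // speed of sound (m/s); temperature dependent

        record Vec3(double x, double y, double z) {
            Vec3 plus(Vec3 o)   { return new Vec3(x + o.x, y + o.y, z + o.z); }
            Vec3 minus(Vec3 o)  { return new Vec3(x - o.x, y - o.y, z - o.z); }
            Vec3 scale(double s){ return new Vec3(s * x, s * y, s * z); }
            double dot(Vec3 o)  { return x * o.x + y * o.y + z * o.z; }
        }

        /** Shooter position from one muzzle blast detection (p1, t1, u)
         *  and one shockwave detection (p2, t2, v); u and v are unit vectors. */
        static Vec3 shooterPosition(Vec3 p1, double t1, Vec3 u,
                                    Vec3 p2, double t2, Vec3 v) {
            double uv = u.dot(v);
            // t = ((P1 - P2)·v / c + (u·v) t1 - t2) / (u·v - 1)
            double t = (p1.minus(p2).dot(v) / C + uv * t1 - t2) / (uv - 1.0);
            // P = P1 + c u (t1 - t)
            return p1.plus(u.scale(C * (t1 - t)));
        }
    }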
6.3 Trajectory estimation
Danicki showed that the bullet trajectory and speed can be computed analytically from two independent shockwave measurements where both ToA and AoA are measured [2]. The method gets more sensitive to measurement errors as the two shockwave directions get closer to each other. In the special case when both directions are the same, the trajectory cannot be computed. In a real-world application, the sensors are typically deployed approximately on a plane. In this case, all sensors located on one side of the trajectory measure almost the same shockwave AoA. To avoid this error sensitivity problem, we consider shockwave measurement pairs only if the direction of arrival difference is larger than a certain threshold.

We have multiple sensors, and one sensor can report two different directions (when only three microphones detect the shockwave). Hence, we typically have several trajectory candidates, i.e., one for each AoA pair over the threshold. We applied an outlier filtering and averaging method to fuse together the shockwave direction and time information and come up with a single trajectory. Assume that we have N individual shockwave AoA measurements. Let us take all possible unordered pairs where the direction difference is above the mentioned threshold and compute the trajectory for each. This gives us at most N(N−1)/2 trajectories. A trajectory is represented by one point pi and the normal vector vi (where i is the trajectory index). We define the distance of two trajectories as the dot product of their normal vectors:

D(i, j) = vi · vj

For each trajectory a neighbor set is defined:

N(i) := {j | D(i, j) < R}

where R is a radius parameter. The largest neighbor set is considered to be the core set C; all other trajectories are outliers. The core set can be found in O(N²) time. The trajectories in the core set are then averaged to get the final trajectory; a sketch of this core-set selection is shown below.

It can happen that we cannot form any sensor pairs because of the direction difference threshold, meaning all sensors are on the same side of the trajectory. In this case, we first compute the shooter position (described in the next section), which fixes p, making v the only unknown. To find v in this case, we use a simple high-resolution grid search and minimize an error function based on the shockwave directions.

We have made experiments to utilize the measured shockwave length in the trajectory estimation. There are some promising results, but it needs further research.
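The core-set selection announced above can be sketched as follows. Note one assumption: the text defines the trajectory distance as the dot product of the normal vectors; in this sketch we treat nearly parallel normals (dot product close to 1) as neighbors, which is our reading of the intended criterion.

    import java.util.ArrayList;
    import java.util.List;

    /** Sketch of the trajectory outlier filtering described above. We use
     *  1 - vi·vj as the closeness measure so that nearly parallel trajectory
     *  normals count as neighbors (our reading of the text's convention). */
    public final class TrajectoryCoreSet {
        /** Returns the indices of the largest neighbor set (the core set).
         *  normals[i] is the unit normal vector of trajectory candidate i. */
        static List<Integer> coreSet(double[][] normals, double r) {
            int n = normals.length;
            List<Integer> best = new ArrayList<>();
            for (int i = 0; i < n; i++) {                 // O(N^2) scan
                List<Integer> neighbors = new ArrayList<>();
                for (int j = 0; j < n; j++) {
                    double dot = normals[i][0] * normals[j][0]
                               + normals[i][1] * normals[j][1]
                               + normals[i][2] * normals[j][2];
                    if (1.0 - dot < r) neighbors.add(j);  // nearly parallel normals
                }
                if (neighbors.size() > best.size()) best = neighbors;
            }
            return best; // members are averaged to obtain the final trajectory
        }
    }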
6.4 Shooter position estimation
The shooter position estimation algorithm aggregates the following heterogeneous information generated by earlier computational steps:

1. the trajectory,
2. muzzle blast ToA at a sensor,
3. muzzle blast AoA at a sensor, which is effectively a bearing estimate to the shooter, and
4. range estimate at a sensor (when both shockwave and muzzle blast AoA are available).

Some sensors report only ToA, some have bearing estimate(s) also, and some have range estimate(s) as well, depending on the number of successful muzzle blast and shockwave detections by the sensor. For an example, refer to Figure 8. Note that a sensor may have two different bearing and range estimates: three detections give two possible AoA-s for the muzzle blast (i.e., bearing) and/or the shockwave. Furthermore, the combination of two different muzzle blast and shockwave AoA-s may result in two different ranges.

Figure 8: Example of heterogeneous input data for the shooter position estimation algorithm. All sensors have ToA measurements (t1, t2, t3, t4, t5), one sensor has a single bearing estimate (v2), one sensor has two possible bearings (v3, v3′) and one sensor has two bearing and two range estimates (v1, v1′, r1, r1′).

In a multipath environment, these detections will contain not only Gaussian noise, but also possibly large errors due to echoes. It has been shown in our earlier work that a similar problem can be solved efficiently with an interval arithmetic based bisection search algorithm [8]. The basic idea is to define a discrete consistency function over the area of interest and subdivide the space into 3D boxes. For any given 3D box, this function gives the number of measurements supporting the hypothesis that the shooter was within that box. The search starts with a box large enough to contain the whole area of interest, then zooms in by dividing and evaluating boxes. The box with the maximum consistency is divided until the desired precision is reached. Backtracking is possible to avoid getting stuck in a local maximum. This approach has been shown to be fast enough for online processing. Note, however, that when the trajectory has already been calculated in previous steps, the search needs to be done only on the trajectory, making it orders of magnitude faster.

Next let us describe in detail how the consistency function is calculated. Consider B, a three-dimensional box whose consistency value we would like to compute. First we consider only the ToA information. If one sensor has multiple ToA detections, we use the average of those times, so one sensor supplies at most one ToA estimate. For each ToA, we can calculate the corresponding time of the shot, since the origin is assumed to be in box B. Since it is a box and not a single point, this gives us an interval for the shot time. The maximum number of overlapping time intervals gives us the value of the consistency function for B. For a detailed description of the consistency function and search algorithm, refer to [8].

Here we extend the approach in the following way. We modify the consistency function based on the bearing and range data from individual sensors. A bearing estimate supports B if the line segment starting from the sensor with the measured direction intersects the box B. A range supports B if the sphere with the radius of the range and origin at the sensor intersects B. Instead of simply checking whether the position specified by the corresponding bearing-range pairs falls within B, this eliminates the sensor's possible orientation error. The value of the consistency function is incremented by one for each bearing and range estimate that is consistent with B.
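The ToA part of the consistency function reduces to counting the maximum number of overlapping shot-time intervals, which a simple endpoint sweep computes. The sketch below assumes the intervals have already been derived from the box geometry; the interval construction itself is omitted.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    /** Sketch of the ToA part of the consistency function: each sensor's ToA,
     *  combined with the candidate box B, yields an interval [lo, hi] for the
     *  shot time; the consistency value is the maximum number of intervals
     *  that overlap. */
    public final class ToaConsistency {
        record Interval(double lo, double hi) {}

        static int consistency(List<Interval> intervals) {
            // Sweep over interval endpoints: +1 at each start, -1 at each end.
            List<double[]> events = new ArrayList<>();
            for (Interval iv : intervals) {
                events.add(new double[] { iv.lo, +1 });
                events.add(new double[] { iv.hi, -1 });
            }
            // Sort by time; at equal times process starts before ends so that
            // touching intervals count as overlapping.
            events.sort(Comparator.<double[]>comparingDouble(e -> e[0])
                                  .thenComparing(e -> -e[1]));
            int depth = 0, best = 0;
            for (double[] e : events) {
                depth += (int) e[1];
                best = Math.max(best, depth);
            }
            return best;
        }
    }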
6.5 Caliber estimation
The shockwave signal characteristics have been studied before by Whitham [20]. He showed that the shockwave period T is related to the projectile diameter d, the length l, the perpendicular miss distance b from the bullet trajectory to the sensor, the Mach number M and the speed of sound c:

T = (1.82 M b^(1/4) / (c (M² − 1)^(3/8))) · (d / l^(1/4)) ≈ (1.82 d / c) (M b / l)^(1/4)

Figure 9: Shockwave length and miss distance relationship. Each data point represents one sensorboard, after an aggregation of the individual measurements of the four acoustic channels. Three different caliber projectiles have been tested (196 shots, 10 sensors).

To illustrate the relationship between miss distance and shockwave length, here we use all 196 shots with three different caliber projectiles fired during the evaluation. (During the evaluation itself we used data obtained previously from a few practice shots per weapon.) 10 sensors (4 microphones per sensor) measured the shockwave length. For each sensor, we considered the shockwave length estimation valid if at least three out of four microphones agreed on a value with at most 5 microseconds variance. This filtering leads to an 86% report rate per sensor and gets rid of large measurement errors. The experimental data is shown in Figure 9. Whitham's formula suggests that the shockwave length for a given caliber can be approximated with a power function of the miss distance (with a 1/4 exponent). The best-fit functions on our data are:

.50 cal: T = 237.75 b^0.2059
7.62 mm: T = 178.11 b^0.1996
5.56 mm: T = 144.39 b^0.1757

To evaluate a shot, we take the caliber whose approximation function results in the smallest RMS error of the filtered sensor readings, as sketched below. This method has less than 1% caliber estimation error when an accurate trajectory estimate is available. In other words, caliber estimation only works if enough shockwave detections are made by the system to compute a trajectory.
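The caliber decision then amounts to evaluating the three fitted power functions against the filtered sensor readings and keeping the caliber with the smallest RMS error. The coefficients below are the ones quoted above; the interface is ours.

    /** Sketch of the caliber classifier: pick the caliber whose fitted power
     *  function T = a * b^e best explains the (miss distance, shockwave
     *  length) pairs reported by the sensors, in the RMS sense. */
    public final class CaliberEstimator {
        record Caliber(String name, double a, double e) {
            double predict(double missDistance) { return a * Math.pow(missDistance, e); }
        }

        static final Caliber[] CALIBERS = {
            new Caliber(".50 cal", 237.75, 0.2059),
            new Caliber("7.62 mm", 178.11, 0.1996),
            new Caliber("5.56 mm", 144.39, 0.1757),
        };

        /** b[i] is the miss distance of sensor i computed from the trajectory,
         *  t[i] the shockwave length (microseconds) that sensor measured. */
        static String estimate(double[] b, double[] t) {
            Caliber best = null;
            double bestRms = Double.POSITIVE_INFINITY;
            for (Caliber cal : CALIBERS) {
                double sumSq = 0;
                for (int i = 0; i < b.length; i++) {
                    double err = t[i] - cal.predict(b[i]);
                    sumSq += err * err;
                }
                double rms = Math.sqrt(sumSq / b.length);
                if (rms < bestRms) { bestRms = rms; best = cal; }
            }
            return best.name();
        }
    }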
6.6 Weapon estimation
We analyzed all measured signal characteristics to find weapon-specific information. Unfortunately, we concluded that the observed muzzle blast signature is not characteristic enough of the weapon for classification purposes. The reflections of the high-energy muzzle blast from the environment have a much higher impact on the muzzle blast signal shape than the weapon itself: shooting the same weapon from different places caused larger differences in the recorded signal than shooting different weapons from the same place.

Figure 10: AK47 and M240 bullet deceleration measurements. Both weapons have the same caliber. Data is approximated using simple linear regression.

Figure 11: M16, M249 and M4 bullet deceleration measurements. All weapons have the same caliber. Data is approximated using simple linear regression.

However, the measured speed of the projectile and its caliber showed good correlation with the weapon type. This is because for a given weapon type and ammunition pair, the muzzle velocity is nearly constant. In Figures 10 and 11 we can see the relationship between the range and the measured bullet speed for different calibers and weapons. In the supersonic speed range, the bullet deceleration can be approximated with a linear function. In the case of the 7.62 mm caliber, the two tested weapons (AK47, M240) can be clearly separated (Figure 10). Unfortunately, this is not necessarily true for the 5.56 mm caliber. The M16, with its higher muzzle speed, can still be well classified, but the M4 and M249 weapons seem practically indistinguishable (Figure 11). However, this may be partially due to the limited number of practice shots we were able to take before the actual testing began. More training data may reveal better separation between the two weapons, since their published muzzle velocities do differ somewhat.

The system carries out weapon classification in the following manner. Once the trajectory is known, the speed can be calculated for each sensor based on the shockwave geometry. To evaluate a shot, we choose the weapon type whose deceleration function results in the smallest RMS error of the estimated range-speed pairs for the estimated caliber class.
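The weapon decision mirrors the caliber decision: within the estimated caliber class, each candidate weapon's linear deceleration model is scored against the measured range-speed pairs. In the sketch below, the muzzle velocity and slope values are placeholders, not the regression coefficients obtained from the test data.

    /** Sketch of the weapon classifier: within the estimated caliber class,
     *  pick the weapon whose linear deceleration model v(range) = v0 - k*range
     *  best fits the measured range-speed pairs in the RMS sense. The v0 and
     *  k values are placeholders, not the actual regression results. */
    public final class WeaponClassifier {
        record Weapon(String name, double v0, double k) {
            double predictSpeed(double range) { return v0 - k * range; }
        }

        /** Example candidate set for the 7.62 mm caliber class; the
         *  coefficients are illustrative only. */
        static final Weapon[] CAL_762 = {
            new Weapon("AK47", 710.0, 0.9),
            new Weapon("M240", 850.0, 0.9),
        };

        static String classify(Weapon[] candidates, double[] range, double[] speed) {
            Weapon best = null;
            double bestRms = Double.POSITIVE_INFINITY;
            for (Weapon w : candidates) {
                double sumSq = 0;
                for (int i = 0; i < range.length; i++) {
                    double err = speed[i] - w.predictSpeed(range[i]);
                    sumSq += err * err;
                }
                double rms = Math.sqrt(sumSq / range.length);
                if (rms < bestRms) { bestRms = rms; best = w; }
            }
            return best.name();
        }
    }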
7. RESULTS
An independent evaluation of the system was carried out by a team from NIST at the US Army Aberdeen Test Center in April 2006 [19]. The experiment was set up on a shooting range with mock-up wooden buildings and walls for supporting elevated shooter positions and generating multipath effects. Figure 12 shows the user interface with an aerial photograph of the site. 10 sensor nodes were deployed on surveyed points in an approximately 30 × 30 m area. There were five fixed targets behind the sensor network. Several firing positions were located at each of the firing lines at 50, 100, 200 and 300 meters. These positions were known to the evaluators, but not to the operators of the system. Six different weapons were utilized: AK47 and M240 firing 7.62 mm projectiles, M16, M4 and M249 with 5.56 mm ammunition, and the .50 caliber M107.

Note that the sensors remained static during the test. The primary reason for this is that nobody is allowed downrange during live fire tests. Utilizing some kind of remote control platform would have been too involved for the limited time the range was available for the test. The experiment, therefore, did not test the mobility aspect of the system.

During the one-day test, 196 shots were fired. The results are summarized in Table 1. The system detected all shots successfully. Since a ballistic shockwave is a unique acoustic phenomenon, it makes the detection very robust. There were no false positives for shockwaves, but there were a handful of false muzzle blast detections due to parallel tests of artillery at a nearby range.

Shooter    Localization  Caliber   Trajectory Azimuth  Trajectory Distance  Distance   No. of
Range (m)  Rate          Accuracy  Error (deg)         Error (m)            Error (m)  Shots
50         93%           100%      0.86                0.91                 2.2        54
100        100%          100%      0.66                1.34                 8.7        54
200        96%           100%      0.74                2.71                 32.8       54
300        97%           97%       1.49                6.29                 70.6       34
All        96%           99.5%     0.88                2.47                 23.0       196

Table 1: Summary of results fusing all available sensor observations. All shots were successfully detected, so the detection rate is omitted. Localization rate means the percentage of shots for which the sensor fusion was able to estimate the trajectory. The caliber accuracy rate is relative to the localized shots and not all shots, because caliber estimation requires the trajectory. The trajectory error is broken down into azimuth in degrees and the actual distance of the shooter from the trajectory. The distance error shows the distance between the real shooter position and the estimated shooter position. As such, it includes the error caused by both the trajectory and the range estimation. Note that the traditional bearing and range measures are not good ones for a distributed system such as ours, because of the lack of a single reference point.

Figure 12: The user interface of the system showing the experimental setup. The 10 sensor nodes are labeled by their ID and marked by dark circles. The targets are black squares marked T-1 through T-5. The long white arrows point to the shooter position estimated by each sensor. Where an arrow is missing, the corresponding sensor did not have enough detections to measure the AoA of the muzzle blast, the shockwave, or both. The thick black line and large circle indicate the estimated trajectory and the shooter position as estimated by fusing all available detections from the network. This shot from the 100-meter line at target T-3 was localized almost perfectly by the sensor network. The caliber and weapon were also identified correctly. 6 out of 10 nodes were able to estimate the location alone. Their bearing accuracy is within a degree, while the range is off by less than 10% in the worst case.

The localization rate characterizes the system's ability to successfully estimate the trajectory of shots. Since caliber estimation and weapon classification rely on the trajectory, non-localized shots are not classified either. There were 7 shots out of 196 that were not localized. The reason for missed shots is the trajectory ambiguity problem that occurs when the projectile passes on one side of all the sensors. In this case, two significantly different trajectories can generate the same set of observations (see [8] and also Section 6.3). Instead of estimating which one is more likely or displaying both possibilities, we decided not to provide a trajectory at all; it is better to give no answer other than a shot alarm than to mislead the soldier.

Localization accuracy is broken down into trajectory accuracy and range estimation precision. The angle of the estimated trajectory was better than 1 degree except for the 300 m range. Since the range should not affect trajectory estimation as long as the projectile passes over the network, we suspect that the slightly worse angle precision at 300 m is due to the hurried shots we witnessed the soldiers taking near the end of the day. This is also indicated by another data point: the estimated trajectory distance from the actual targets has an average error of 1.3 m for 300 m shots, 0.75 m for 200 m shots and 0.6 m for all but the 300 m shots. As the distance between the targets and the sensor network was fixed, this number should not show a 2× improvement just because the shooter is closer.

Since the angle of the trajectory itself does not characterize the overall error (there can be a translation as well), Table 1 also gives the distance of the shooter from the estimated trajectory. These figures indicate an error which is about 1-2% of the range. To put this into perspective, a trajectory estimate for a 100 m shot will very likely go through or very near the window the shooter is located at. Again, we believe that the disproportionally larger errors at 300 m are due to human errors in aiming.
As the ground truth was obtained by knowing the precise location of the shooter and the target, any inaccuracy in the actual trajectory directly adds to the perceived error of the system.

We call the estimation of the shooter's position on the calculated trajectory range estimation, for lack of a better term. The range estimates are better than 5% accurate from 50 m and better than 10% from 100 m. However, this degrades to 20% or worse for longer distances. We did not have a facility to test the system before the evaluation at ranges beyond 100 m. During the evaluation, we ran into the problem of mistaking shockwave echoes for muzzle blasts. These echoes reached the sensors before the real muzzle blast for long range shots only, since the projectile travels 2-3× faster than the speed of sound, so the time between the shockwave (and its possible echo from nearby objects) and the muzzle blast increases with increasing range. This resulted in underestimating the range, since the system measured shorter times than the real ones. Since the evaluation, we have fine-tuned the muzzle blast detection algorithm to avoid this problem.

Distance  M16   AK47  M240  M107  M4   M249  M4-M249
50 m      100%  100%  100%  100%  11%  25%   94%
100 m     100%  100%  100%  100%  22%  33%   100%
200 m     100%  100%  100%  100%  50%  22%   100%
300 m     67%   100%  83%   100%  33%  0%    57%
All       96%   100%  97%   100%  23%  23%   93%

Table 2: Weapon classification results. The percentages are relative to the number of shots localized and not all shots, as the classification algorithm needs to know the trajectory and the range. Note that the difference is small; 189 shots were localized out of the total 196.

The caliber and weapon estimation accuracy rates are based on the 189 shots that were successfully localized. Note that there was a single shot that was falsely classified by the caliber estimator. The 73% overall weapon classification accuracy does not seem impressive, but if we break it down by the six different weapons tested, the picture changes dramatically, as shown in Table 2. For four of the weapons (AK47, M16, M240 and M107), the classification rate is almost 100%. There were only two shots out of approximately 140 that were missed. The M4 and M249 proved to be too similar and were mistaken for each other most of the time. One possible explanation is that we had only a limited number of test shots taken with these weapons right before the evaluation and used the wrong deceleration approximation function. Either this or a similar mistake was made, since if we had simply used the opposite of the system's answer where one of these weapons was indicated, the accuracy would have improved threefold. If we consider these two weapons a single weapon class, then the classification accuracy for that class becomes 93%.

Note that the AK47 and M240 have the same caliber (7.62 mm), just as the M16, M4 and M249 do (5.56 mm). That is, the system is able to differentiate between weapons of the same caliber. We are not aware of any system that classifies weapons this accurately.

7.1 Single sensor performance
As was shown previously, a single sensor alone is able to localize the shooter if it can determine both the muzzle blast and the shockwave AoA, that is, it needs to measure the ToA of both on at least three acoustic channels.
While shockwave detection is independent of the range (unless the projectile becomes subsonic), the likelihood of muzzle blast detection beyond 150 meters is not high enough to consistently get at least three detections per sensor node for AoA estimation. Hence, we only evaluate the single sensor performance for the 104 shots that were taken from 50 and 100 m. Note that we use the same test data as in the previous section, but we evaluate each sensor individually.

Table 3 summarizes the results broken down by the ten sensors utilized. Since this is now not a distributed system, the results are given relative to the position of the given sensor, that is, a bearing and range estimate is provided. Note that many of the common error sources of the networked system do not play a role here. Time synchronization is not applicable. The sensor's absolute location is irrelevant (just as the relative location of multiple sensors). The sensor's orientation is still important, though. There are several disadvantages of the single sensor case compared to the networked system: there is no redundancy to compensate for other errors and to perform outlier rejection, the localization rate is markedly lower, and a single sensor alone is not able to estimate the caliber or classify the weapon.

Sensor id      1     2     3     5     7     8     9     10    11    12
Loc. rate      44%   37%   53%   52%   19%   63%   51%   31%   23%   44%
Bearing (deg)  0.80  1.25  0.60  0.85  1.02  0.92  0.73  0.71  1.28  1.44
Range (m)      3.2   6.1   4.4   4.7   4.6   4.6   4.1   5.2   4.8   8.2

Table 3: Single sensor accuracy for 108 shots fired from 50 and 100 meters. Localization rate refers to the percentage of shots the given sensor alone was able to localize. The bearing and range values are average errors. They characterize the accuracy of localization from the given sensor's perspective.

The data indicates that the performance of the sensors varied significantly, especially considering the localization rate. One factor has to be the location of the given sensor, including how far it was from the firing lines and how obstructed its view was. Also, the sensors were hand-built prototypes utilizing nowhere near production quality packaging/mounting. In light of these factors, the overall average bearing error of 0.9 degrees and range error of 5 m with a microphone spacing of less than 10 cm are excellent. We believe that professional manufacturing and better microphones could easily achieve better performance than the best sensor in our experiment (>60% localization rate and 3 m range error).

Interestingly, the largest error in range was a huge 90 m, clearly due to some erroneous detection, yet the largest bearing error was less than 12 degrees, which is still a good indication for the soldier where to look.

The overall localization rate over all single sensors was 42%, while for 50 m shots only, this jumped to 61%. Note that the firing range was prepared to simulate an urban area to some extent: there were a few single- and two-storey wooden structures built both in and around the sensor deployment area and the firing lines. Hence, not all sensors had line-of-sight to all shooting positions. We estimate that 10% of the sensors had an obstructed view to the shooter on average. Hence, we can claim that a given sensor had about a 50% chance of localizing a shot within 130 m. (Since the sensor deployment area was 30 m deep, 100 m shots correspond to actual distances between 100 and 130 m.)
Again, we emphasize that localization needs at least three muzzle blast and three shockwave detections out of a possible four each per sensor. The detection rate for single sensors, corresponding to at least one shockwave detection per sensor, was practically 100%.

Figure 13: Histogram showing what fraction of the 104 shots taken from 50 and 100 meters were localized by at most how many individual sensors alone. 13% of the shots were missed by every single sensor, i.e., none of them had both muzzle blast and shockwave AoA detections. Note that almost all of these shots were still accurately localized by the networked system, i.e., the sensor fusion using all available observations in the sensor network.

It would be misleading to interpret these results as the system missing half the shots. As soldiers never work alone and the sensor node is cheap enough that every soldier can be equipped with one, we also need to look at the overall detection rates for every shot. Figure 13 shows the histogram of the percentage of shots versus the number of individual sensors that localized it. 13% of shots were not localized by any sensor alone, but 87% were localized by at least one sensor out of ten.

7.2 Error sources
In this section, we analyze the most significant sources of error that affect the performance of the networked shooter localization and weapon classification system. In order to correlate the distributed observations of the acoustic events, the nodes need to have a common time and space reference. Hence, errors in the time synchronization, node localization and node orientation all degrade the overall accuracy of the system.

Our time synchronization approach yields errors significantly less than 100 microseconds. As sound travels about 3 cm in that time, time synchronization errors have a negligible effect on the system.

On the other hand, node location and orientation can have a direct effect on the overall system performance. Notice that to analyze this, we do not have to resort to simulation; instead, we can utilize the real test data gathered at Aberdeen. Instead of using the real sensor locations, known very accurately, and the measured and calibrated, almost perfect, node orientations, we can add error terms to them and run the sensor fusion. This exactly replicates how the system would have performed during the test using imprecisely known locations and orientations.

Another aspect of the system performance that can be evaluated this way is the effect of the number of available sensors. Instead of using all ten sensors in the data fusion, we can pick any subset of the nodes to see how the accuracy degrades as we decrease the number of nodes.

The following experiment was carried out. The number of sensors was varied from 2 to 10 in increments of 2. Each run picked the sensors randomly using a uniform distribution. In each run, each node was randomly moved to a new location within a circle around its true position with a radius determined by a zero-mean Gaussian distribution. Finally, the node orientations were perturbed using a zero-mean Gaussian distribution. Each combination of parameters was generated 100 times and utilized for all 196 shots; a sketch of this perturbation procedure is shown below.
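The perturbation itself is straightforward; the following sketch shows one way to draw the randomized node positions and orientations, assuming an illustrative Node type. Following the text's wording, the displacement radius is drawn from a zero-mean Gaussian and the direction uniformly.

    import java.util.Random;

    /** Sketch of the sensitivity experiment described above: perturb each
     *  node's surveyed position and calibrated orientation with zero-mean
     *  Gaussian noise before re-running the sensor fusion. The Node type
     *  and its field names are illustrative. */
    public final class PerturbationExperiment {
        record Node(double x, double y, double z, double headingDeg) {}

        static final Random RNG = new Random();

        /** posSigma in meters, headingSigma in degrees. */
        static Node perturb(Node n, double posSigma, double headingSigma) {
            // Displace the node within the horizontal plane: the displacement
            // radius is zero-mean Gaussian, the direction uniform.
            double r = RNG.nextGaussian() * posSigma;
            double phi = RNG.nextDouble() * 2 * Math.PI;
            return new Node(n.x + r * Math.cos(phi),
                            n.y + r * Math.sin(phi),
                            n.z,
                            n.headingDeg + RNG.nextGaussian() * headingSigma);
        }
    }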
The results are summarized in Figure 14. There is one 3D bar chart for each experiment set with a given fixed number of sensors. The x-axis shows the node location error, that is, the standard deviation of the corresponding Gaussian distribution, which was varied between 0 and 6 meters. The y-axis shows the standard deviation of the node orientation error, which was varied between 0 and 6 degrees. The z-axis is the resulting trajectory azimuth error. Note that the elevation angles showed somewhat larger errors than the azimuth. Since all the sensors were in approximately a horizontal plane and only a few shooter positions were out of that plane, and only by 2 m or so, the test was not sufficient to evaluate this aspect of the system.

Figure 14: The effect of node localization and orientation errors on azimuth accuracy with 2, 4, 6 and 8 nodes. Note that the chart for 10 nodes is almost identical to the 8-node case; hence, it is omitted.

There are many interesting observations one can make by analyzing these charts. Node location errors in this range have a small effect on accuracy. Node orientation errors, on the other hand, noticeably degrade the performance. Still, the largest errors in this experiment, 3.5 degrees for 6 sensors and 5 degrees for 2 sensors, are very good.

Note that as the location and orientation errors increase and the number of sensors decreases, the most significantly affected performance metric is the localization rate. See Table 4 for a summary. Successful localization goes down from almost 100% to 50% when we go from 10 sensors to 2, even without additional errors. This is primarily caused by geometry: for a successful localization, the bullet needs to pass over the sensor network, that is, at least one sensor should be on the side of the trajectory other than the rest of the nodes. (This is a simplification for illustrative purposes. If all the sensors and the trajectory are not coplanar, localization may be successful even if the projectile passes on one side of the network. See Section 6.3.) As the number of sensors decreased in the experiment by randomly selecting a subset, the probability of trajectories abiding by this rule decreased. This also means that even if there are many sensors (i.e., soldiers), but all of them are right next to each other, the localization rate will suffer. However, when the sensor fusion does provide a result, it is still accurate even with few available sensors and relatively large individual errors: a few consistent observations lead to good accuracy, as the inconsistent ones are discarded by the algorithm.
This is also supported by the observation that for the cases with a higher number of sensors (8 or 10), the localization rate is hardly affected by even large errors.

Errors / Sensors  2     4     6     8     10
0 m, 0 deg        54%   87%   94%   95%   96%
2 m, 2 deg        53%   80%   91%   96%   96%
6 m, 0 deg        43%   79%   88%   94%   94%
0 m, 6 deg        44%   78%   90%   93%   94%
6 m, 6 deg        41%   73%   85%   89%   92%

Table 4: Localization rate as a function of the number of sensors used and the sensor node location and orientation errors.

One of the most significant observations on Figure 14 and Table 4 is that there is hardly any difference in the data for 6, 8 and 10 sensors. This means that there is little advantage in adding more nodes beyond 6 sensors as far as accuracy is concerned.

The speed of sound depends on the ambient temperature. The current prototype considers it a constant that is typically set before a test. It would be straightforward to employ a temperature sensor to update the value of the speed of sound periodically during operation. Note also that wind may adversely affect the accuracy of the system. The sensor fusion, however, could incorporate wind speed into its calculations. It would be more complicated than temperature compensation, but it could be done.

Other practical issues also need to be looked at before a real-world deployment. Silencers reduce the muzzle blast energy and hence the effective range at which the system can detect it. However, silencers do not affect the shockwave, and the system would still detect the trajectory and caliber accurately. The range and weapon type could not be estimated without muzzle blast detections. Subsonic weapons do not produce a shockwave. However, this is not of great significance, since they have shorter range, lower accuracy and much less lethality; hence, their use is not widespread and they pose less danger in any case.

Another issue is the type of ammunition used. Irregular armies may use substandard, even hand-manufactured bullets. This affects the muzzle velocity of the weapon. For weapon classification to work accurately, the system would need to be calibrated with the typical ammunition used by the given adversary.

8. RELATED WORK
Acoustic detection and recognition has been under research since the early fifties. The area has a close relevance to the topic of supersonic flow mechanics [20]. Fansler analyzed the complex near-field pressure waves that occur within a foot of the muzzle blast. Fansler's work gives a good idea of the ideal muzzle blast pressure wave without contamination from echoes or propagation effects [4]. Experiments at greater distances from the muzzle were conducted by Stoughton [18]: measurements of the ballistic shockwaves were made using calibrated pressure transducers at known locations, with measured bullet speeds and miss distances of 355 meters for 5.56 mm and 7.62 mm projectiles. Results indicate that ground interaction becomes a problem for miss distances of 30 meters or larger.

Another area of research is the signal processing of gunfire acoustics. The focus is on the robust detection and length estimation of small caliber acoustic shockwaves and muzzle blasts. Possible techniques for classifying signals as either shockwaves or muzzle blasts include the short-time Fourier Transform (STFT), the Smoothed Pseudo Wigner-Ville distribution (SPWVD), and the discrete wavelet transform (DWT).
Joint time-frequency (JTF) spectrograms are used to analyze the typical separation of the shockwave and muzzle blast transients in both time and frequency. Mays concludes that the DWT is the best method for classifying signals as either shockwaves or muzzle blasts because it works well and is less expensive to compute than the SPWVD [10].

The edges of the shockwave are typically well defined, and the shockwave length is directly related to the bullet characteristics. A paper by Sadler [14] compares two shockwave edge detection methods: a simple gradient-based detector and a multi-scale wavelet detector. It also demonstrates how the length of the shockwave, as determined by the edge detectors, can be used along with Whitham's equations [20] to estimate the caliber of a projectile. Note that the available computational performance on the sensor nodes, the limited wireless bandwidth and the real-time requirements render these approaches infeasible on our platform.

A related topic is the research and development of experimental and prototype shooter location systems. Researchers at BBN have developed the Bullet Ears system [3], which can be installed in a fixed position or worn by soldiers. The fixed system has tetrahedron-shaped microphone arrays with 1.5-meter spacing. The overall system consists of two to three of these arrays spaced 20 to 100 meters from each other. The soldier-worn system has 12 microphones as well as a GPS antenna and orientation sensors mounted on a helmet. There is a low-speed RF connection from the helmet to the processing body. An extensive test was conducted to measure the performance of both types of systems. The fixed system's performance was one order of magnitude better in the angle calculations, while the range performance of the two was matched. The angle accuracy of the fixed system was predominantly less than one degree, while it was around five degrees for the helmet-mounted one. The range accuracy was around 5 percent for both systems. The problem with this and similar centralized systems is that the one or handful of microphone arrays must be in the line of sight of the shooter. A sensor network-based solution has the advantage of widely distributed sensing for better coverage, multipath effect compensation and multiple simultaneous shot resolution [8]. This is especially important for operation in acoustically reverberant urban areas. Note that BBN's current vehicle-mounted system called BOOMERANG, a modified version of Bullet Ears, is currently used in Iraq [1].

The company ShotSpotter specializes in law enforcement systems that report the location of gunfire to police within seconds. The goal of the system is significantly different from that of military systems. ShotSpotter reports a typical accuracy of 25 m, which is more than enough for police to respond. They are also manufacturing experimental soldier-wearable and UAV-mounted systems for military use [16], but no specifications or evaluation results are publicly available.

9. CONCLUSIONS

The main contribution of this work is twofold. First, the performance of the overall distributed networked system is excellent. Most noteworthy are the trajectory accuracy of one degree, the correct caliber estimation rate of well over 90%, and the close to 100% weapon classification rate for 4 of the 6 weapons tested.
The system proved to be very robust when increasing the node location and orientation errors and decreasing the number of available sensors all the way down to a couple. The key factor behind this is the sensor fusion algorithm's ability to reject erroneous measurements. It is also worth mentioning that the results presented here correspond to the first and only test of the system beyond 100 m and with six different weapons. We believe that with the lessons learned in the test, a subsequent field experiment could have shown significantly improved results, especially in range estimation beyond 100 m and in weapon classification for the remaining two weapons, which were mistaken for each other the majority of the time during the test.

Second, the performance of the system when used in standalone mode, that is, when single sensors alone provided localization, was also very good. While the overall localization rate of 42% per sensor for shots up to 130 m could be improved, the bearing accuracy of less than a degree and the average 5% range error are remarkable for the handmade prototypes of the low-cost nodes. Note that 87% of the shots were successfully localized by at least one of the ten sensors utilized in standalone mode.

We believe that the technology is mature enough that a next revision of the system could be a commercial one. However, important aspects of the system would still need to be worked on. We have not addressed power management yet. A current node runs on 4 AA batteries for about 12 hours of continuous operation. A deployable version of the sensor node would need to be asleep during normal operation and only wake up when an interesting event occurs. An analog trigger circuit could solve this problem; however, the system would miss the first shot. Instead, the acoustic channels would need to be sampled and stored in a circular buffer. The rest of the board could be turned off. When a trigger wakes up the board, the acoustic data would be immediately available. Experiments with a previous-generation sensor board indicated that this could provide a 10x increase in battery life. Other outstanding issues include weatherproof packaging and ruggedization, as well as integration with current military infrastructure.

10. REFERENCES
[1] BBN Technologies website. http://www.bbn.com.
[2] E. Danicki. Acoustic sniper localization. Archives of Acoustics, 30(2):233-245, 2005.
[3] G. L. Duckworth et al. Fixed and wearable acoustic counter-sniper systems for law enforcement. In E. M. Carapezza and D. B. Law, editors, Proc. SPIE Vol. 3577, pages 210-230, Jan. 1999.
[4] K. Fansler. Description of muzzle blast by modified scaling models. Shock and Vibration, 5(1):1-12, 1998.
[5] D. Gay, P. Levis, R. von Behren, M. Welsh, E. Brewer, and D. Culler. The nesC language: a holistic approach to networked embedded systems. In Proceedings of Programming Language Design and Implementation (PLDI), June 2003.
[6] J. Hill, R. Szewczyk, A. Woo, S. Hollar, D. Culler, and K. Pister. System architecture directions for networked sensors. In Proc. of ASPLOS 2000, Nov. 2000.
[7] B. Kusý, G. Balogh, P. Völgyesi, J. Sallai, A. Nádas, A. Lédeczi, M. Maróti, and L. Meertens. Node-density independent localization. In Information Processing in Sensor Networks (IPSN 06) SPOTS Track, Apr. 2006.
[8] A. Lédeczi, A. Nádas, P. Völgyesi, G. Balogh, B. Kusý, J. Sallai, G. Pap, S. Dóra, K. Molnár, M. Maróti, and G. Simon. Countersniper system for urban warfare. ACM Transactions on Sensor Networks, 1(1):153-177, Nov. 2005.
[9] M. Maróti. Directed flood-routing framework for wireless sensor networks. In Proceedings of the 5th ACM/IFIP/USENIX International Conference on Middleware, pages 99-114, New York, NY, USA, 2004. Springer-Verlag New York, Inc.
[10] B. Mays. Shockwave and muzzle blast classification via joint time frequency and wavelet analysis. Technical report, Army Research Lab, Adelphi, MD 20783-1197, Sept. 2001.
[11] TinyOS Hardware Platforms. http://tinyos.net/scoop/special/hardware.
[12] Crossbow MICAz (MPR2400) Radio Module. http://www.xbow.com/Products/productsdetails.aspx?sid=101.
[13] PicoBlaze User Resources. http://www.xilinx.com/ipcenter/processor_central/picoblaze/picoblaze_user_resources.htm.
[14] B. M. Sadler, T. Pham, and L. C. Sadler. Optimal and wavelet-based shock wave detection and estimation. Acoustical Society of America Journal, 104:955-963, Aug. 1998.
[15] J. Sallai, B. Kusý, A. Lédeczi, and P. Dutta. On the scalability of routing-integrated time synchronization. In 3rd European Workshop on Wireless Sensor Networks (EWSN 2006), Feb. 2006.
[16] ShotSpotter website. http://www.shotspotter.com/products/military.html.
[17] G. Simon, M. Maróti, A. Lédeczi, G. Balogh, B. Kusý, A. Nádas, G. Pap, J. Sallai, and K. Frampton. Sensor network-based countersniper system. In SenSys '04: Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, pages 1-12, New York, NY, USA, 2004. ACM Press.
[18] R. Stoughton. Measurements of small-caliber ballistic shock waves in air. Acoustical Society of America Journal, 102:781-787, Aug. 1997.
[19] B. A. Weiss, C. Schlenoff, M. Shneier, and A. Virts. Technology evaluations and performance metrics for soldier-worn sensors for ASSIST. In Performance Metrics for Intelligent Systems Workshop, Aug. 2006.
[20] G. Whitham. Flow pattern of a supersonic projectile. Communications on Pure and Applied Mathematics, 5(3):301, 1952.
", "keywords": "1degree trajectory precision;weapon type;weapon classification;range;sensorboard;trajectory;internode communication;datum fusion;acoustic source localization;sensor network;caliber estimation accuracy;helmetmounted microphone array;caliber;caliber estimation;wireless sensor network-based mobile countersniper system;self orientation"} {"name": "train_C-66", "title": "Heuristics-Based Scheduling of Composite Web Service Workloads", "abstract": "Web services can be aggregated to create composite workflows that provide streamlined functionality for human users or other systems. Although industry standards and recent research have sought to define best practices and to improve end-to-end workflow composition, one area that has not fully been explored is the scheduling of a workflow's web service requests to actual service provisioning in a multi-tiered, multi-organisation environment. This issue is relevant to modern business scenarios where business processes within a workflow must complete within QoS-defined limits. Because these business processes are web service consumers, service requests must be mapped and scheduled across multiple web service providers, each with its own negotiated service level agreement.
In this paper we provide heuristics for scheduling service requests from multiple business process workflows to web service providers such that a business value metric across all workflows is maximised. We show that a genetic search algorithm is appropriate to perform this scheduling, and through experimentation we show that our algorithm scales well up to a thousand workflows and produces better mappings than traditional approaches.", "fulltext": "1. INTRODUCTION

Web services can be composed into workflows to provide streamlined end-to-end functionality for human users or other systems. Although previous research efforts have looked at ways to intelligently automate the composition of web services into workflows (e.g. [1, 9]), an important remaining problem is the assignment of web service requests to the underlying web service providers in a multi-tiered runtime scenario within constraints. In this paper we address this scheduling problem and examine means to manage a large number of business process workflows in a scalable manner.

The problem of scheduling web service requests to providers is relevant to modern business domains that depend on multi-tiered service provisioning. Consider the example shown in Figure 1, which illustrates our problem space. Workflows comprise multiple related business processes that are web service consumers; here we assume that the workflows represent services requested by customers or automated systems, and that each workflow has already been composed with an existing choreography toolkit. These workflows are then submitted to a portal (not shown) that acts as a scheduling agent between the web service consumers and the web service providers.

In this example, a workflow could represent the actions needed to instantiate a vacation itinerary, where one business process requests booking an airline ticket, another business process requests a hotel room, and so forth. Each of these requests targets a particular service type (e.g. airline reservations, hotel reservations, car reservations, etc.), and for each service type, there are multiple instances of service providers that publish a web service interface. An important challenge is that the workflows must meet some quality-of-service (QoS) metric, such as end-to-end completion time of all their business processes, and that meeting or failing this goal results in the assignment of a quantitative business value metric for the workflow; intuitively, it is desired that all workflows meet their respective QoS goals. We further leverage the notion that QoS service agreements are generally agreed upon between the web service providers and the scheduling agent, such that the providers advertise some level of guaranteed QoS to the scheduler based upon runtime conditions such as turnaround time and maximum available concurrency. The resulting problem is then to schedule and assign the business processes' requests for service types to one of the service providers for that type. The scheduling must be done such that the aggregate business value across all the workflows is maximised.

In Section 3 we state the scenario as a combinatorial problem and utilise a genetic search algorithm [5] to find the best assignment of web service requests to providers.
This approach converges towards an assignment that maximises the overall business value for all the workflows. In Section 4 we show through experimentation that this search heuristic finds better assignments than other algorithms (greedy, round-robin, and proportional). Further, this approach allows us to scale the number of simultaneous workflows (up to one thousand workflows in our experiments) and yet still find effective schedules.

2. RELATED WORK

In the context of service assignment and scheduling, [11] maps web service calls to potential servers using linear programming, but their work is concerned with mapping only single workflows; our principal focus is on scalably scheduling multiple workflows (up to one thousand as we show later) using different business metrics and a search heuristic.

Figure 1: An example scenario demonstrating the interaction between business processes in workflows and web service providers. Each business process accesses a service type and is then mapped to a service provider for that type.

[10] presents a dynamic provisioning approach that uses both predictive and reactive techniques for multi-tiered Internet application delivery. However, the provisioning techniques do not consider the challenges faced when there are alternative query execution plans and replicated data sources. [8] presents a feedback-based scheduling mechanism for multi-tiered systems with back-end databases, but unlike our work, it assumes a tighter coupling between the various components of the system.

Our work also builds upon prior scheduling research. The classic job-shop scheduling problem, shown to be NP-complete [3, 4], is similar to ours in that tasks within a job must be scheduled onto machinery (cf. our scenario, where business processes within a workflow must be scheduled onto web service providers). The salient differences are that the machines can process only one job at a time (we assume servers can multi-task but with degraded performance and a maximum concurrency level), tasks within a job cannot simultaneously run on different machines (we assume business processes can be assigned to any available server), and the principal metric of performance is the makespan, which is the time for the last task among all the jobs to complete (and as we show later, optimising on the makespan is insufficient for scheduling the business processes, necessitating different metrics).

3. DESIGN

In this section we describe our model and discuss how we can find scheduling assignments using a genetic search algorithm.

3.1 Model

We base our model on the simplified scenario shown in Figure 1. Specifically, we assume that users or automated systems request the execution of a workflow. The workflows comprise business processes, each of which makes one web service invocation to a service type. Further, business processes have an ordering in the workflow. The arrangement and execution of the business processes and the data flow between them are all managed by a composition or choreography tool (e.g. [1, 9]).
Although composition languages can use sophisticated flow-control mechanisms such as conditional branches, for simplicity we assume the processes execute sequentially in a given order.

This scenario can be naturally extended to more complex relationships that can be expressed in BPEL [7], which defines how business processes interact, messages are exchanged, activities are ordered, and exceptions are handled. Due to space constraints, we focus on the problem space presented here and will extend our model to more advanced deployment scenarios in the future.

Each workflow has a QoS requirement to complete within a specified number of time units (e.g. on the order of seconds, as detailed in the Experiments section). Upon completion (or failure), the workflow is assigned a business value. We extended this approach further and considered different types of workflow completion in order to model differentiated QoS levels that can be applied by businesses (for example, to provide tiered customer service). We say that a workflow is successful if it completes within its QoS requirement, acceptable if it completes within a constant factor κ of its QoS bound (in our experiments we chose κ=3), or failing if it finishes beyond κ times its QoS bound. For each category, a business value score is assigned to the workflow, with the successful category assigned the highest positive score, followed by acceptable and then failing. The business value point distribution is non-uniform across workflows, further modelling cases where some workflows are of higher priority than others.

Each service type is implemented by a number of different service providers. We assume that the providers make service level agreements (SLAs) to guarantee a level of performance defined by the completion time for completing a web service invocation. Although SLAs can be complex, in this paper we assume for simplicity that the guarantees can take the form of a linear performance degradation under load. This guarantee is defined by several parameters: α is the expected completion time (for example, on the order of seconds) if the assigned workload of web service requests is less than or equal to β, the maximum concurrency; if the workload is higher than β, the expected completion for a workload of size ω is α + γ(ω − β), where γ is a fractional coefficient. In our experiments we vary α, β, and γ with different distributions.

Ideally, all workflows would be able to finish within their QoS limits and thus maximise the aggregate business value across all workflows. However, because we model service providers with degrading performance under load, not all workflows will achieve their QoS limit: it may easily be the case that business processes are assigned to providers who are overloaded and cannot complete within the respective workflow's QoS limit. The key research problem, then, is to assign the business processes to the web service providers with the goal of optimising on the aggregate business value of all workflows.

Given that the scope of the optimisation is the entire set of workflows, it may be that the best scheduling assignments result in some workflows having to fail in order for more workflows to succeed.
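To make this service-level model concrete, the following C++ sketch (our own illustration, not code from the simulator described in Section 4) computes a provider's expected completion time under the linear-degradation guarantee and classifies a finished workflow into the three completion categories, with κ defaulting to 3 as in our experiments.

    struct Provider {
        double alpha;   // expected completion time in seconds under light load
        int    beta;    // maximum concurrency before performance degrades
        double gamma;   // fractional degradation coefficient
    };

    // Expected completion time when a provider carries omega concurrent requests:
    // alpha if omega <= beta, otherwise alpha + gamma * (omega - beta).
    double expectedCompletion(const Provider& p, int omega) {
        return omega <= p.beta ? p.alpha
                               : p.alpha + p.gamma * (omega - p.beta);
    }

    enum class Outcome { Successful, Acceptable, Failed };

    // Classify a finished workflow against its QoS bound (kappa = 3 in our experiments).
    Outcome classify(double elapsed, double qosBound, double kappa = 3.0) {
        if (elapsed <= qosBound)         return Outcome::Successful;
        if (elapsed <= kappa * qosBound) return Outcome::Acceptable;
        return Outcome::Failed;
    }

For instance, a provider with α = 2 s, β = 4, and γ = 0.5 that is assigned ω = 10 concurrent requests yields an expected completion time of 2 + 0.5 × (10 − 4) = 5 s.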
This intuitive observation, namely that maximising aggregate business value can require letting some workflows fail, suggests that traditional scheduling approaches such as round-robin or proportional assignments will not fare well, which is what we observe and discuss in Section 4. On the other hand, an exhaustive search of all the possible assignments will find the best schedule, but the computational complexity is prohibitively high. Suppose there are W workflows with an average of B business processes per workflow. Further, in the worst case each business process requests one service type, for which there are P providers. There are thus W · P^B combinations to explore to find the optimal assignments of business processes to providers. Even for small configurations (e.g. W=10, B=5, P=10), the computational time for exhaustive search is significant, and in our work we look to scale these parameters. In the next subsection, we discuss how a genetic search algorithm can be used to converge toward the optimum scheduling assignments.

3.2 Genetic algorithm

Given an exponential search space of business process assignments to web service providers, the problem is to find the optimal assignment that produces the overall highest aggregate business value across all workflows. To explore the solution space, we use a genetic algorithm (GA) search heuristic that simulates Darwinian natural selection by having members of a population compete to survive in order to pass their genetic chromosomes onto the next generation; after successive generations, there is a tendency for the chromosomes to converge toward the best combination [5] [6]. Although other search heuristics exist that can solve optimization problems (e.g. simulated annealing or steepest-ascent hill climbing), the business process scheduling problem fits well with a GA because potential solutions can be represented in matrix form, which allows us to use prior research in effective GA chromosome recombination to form new members of the population (e.g. [2]).

    Service type:   0  1  2  3  4
    Workflow 0:     1  2  0  2  1
    Workflow 1:     0  1  0  1  0
    Workflow 2:     1  2  0  0  1

Figure 2: An example chromosome representing a scheduling assignment of (workflow, service type) → service provider. Each row represents a workflow, and each column represents a service type. For example, here there are 3 workflows (0 to 2) and 5 service types (0 to 4). In workflow 0, any request for service type 3 goes to provider 2. Note that the service provider identifier is within a range limited to its service type (i.e. its column), so the 2 listed for service type 3 is a different server from server 2 in other columns.

Chromosome representation of a solution. In Figure 2 we show an example chromosome that encodes one scheduling assignment. The representation is a 2-dimensional matrix that maps {workflow, service type} to a service provider. For a business process in workflow i utilising service type j, the (i, j)th entry in the table is the identifier of the service provider to which the business process is assigned. Note that the service provider identifier is within a range limited to its service type.

GA execution. A GA proceeds as follows. Initially a random set of chromosomes is created for the population. The chromosomes are evaluated (hashed) to some metric, and the best ones are chosen to be parents. In our problem, the evaluation produces the net business value across all workflows after executing all business processes once they are assigned to their respective service providers according to the mapping in the chromosome.
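Before turning to the recombination and mutation details below, the following C++ sketch (again our own illustration rather than the simulator's code; the names and the quadrant orientation are assumptions) shows one way to realise the chromosome of Figure 2 together with the two-dimensional one-point crossover and the uni-chromosome mutation described next. The vector providersPerType records how many providers implement each service type, so every entry stays within the legal range for its column.

    #include <cstdlib>
    #include <vector>

    // chromo[w][s] = provider assigned to service type s in workflow w (cf. Figure 2).
    using Chromosome = std::vector<std::vector<int>>;

    // Create a random population member; providersPerType[s] bounds column s.
    Chromosome randomChromosome(int workflows, const std::vector<int>& providersPerType) {
        Chromosome c(workflows, std::vector<int>(providersPerType.size()));
        for (auto& row : c)
            for (std::size_t s = 0; s < row.size(); ++s)
                row[s] = std::rand() % providersPerType[s];
        return c;
    }

    // One-point crossover applied along each dimension: a random pivot acts as
    // the origin, and the child takes two opposite quadrants from each parent,
    // keeping contiguous chromosome segments together.
    Chromosome crossover(const Chromosome& a, const Chromosome& b) {
        const int rows = static_cast<int>(a.size());
        const int cols = static_cast<int>(a[0].size());
        const int pr = std::rand() % rows;
        const int pc = std::rand() % cols;
        Chromosome child = a;                  // two quadrants from the first parent
        for (int w = 0; w < rows; ++w)
            for (int s = 0; s < cols; ++s)
                if ((w < pr) == (s < pc))      // opposite quadrants from the second
                    child[w][s] = b[w][s];
        return child;
    }

    // Uni-chromosome mutation: reassign one random entry to another provider
    // within the range available for its service type.
    void mutate(Chromosome& c, const std::vector<int>& providersPerType) {
        const int w = std::rand() % static_cast<int>(c.size());
        const int s = std::rand() % static_cast<int>(providersPerType.size());
        c[w][s] = std::rand() % providersPerType[s];
    }

Seeding the initial population then amounts to calling randomChromosome once per member, and the evaluation just described supplies the fitness by which parents are selected.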
The parents recombine to produce children, simulating sexual crossover, and occasionally a mutation may arise which produces new characteristics that were not available in either parent. The principal idea is that we would like the children to be different from the parents (in order to explore more of the solution space) yet not too different (in order to retain the portions of the chromosome that result in good scheduling assignments). Note that finding the global optimum is not guaranteed because the recombination and mutation are stochastic.

GA recombination and mutation. As mentioned, the chromosomes are 2-dimensional matrices that represent scheduling assignments. To simulate sexual recombination of two chromosomes to produce a new child chromosome, we applied a one-point crossover scheme twice (once along each dimension). The crossover is best explained by analogy to Cartesian space as follows. A random point is chosen in the matrix to be coordinate (0, 0). Matrix elements from quadrants II and IV from the first parent and elements from quadrants I and III from the second parent are used to create the new child. This approach follows GA best practices by keeping contiguous chromosome segments together as they are transmitted from parent to child.

The uni-chromosome mutation scheme randomly changes one of the service provider assignments to another provider within the available range. Other recombination and mutation schemes are an area of research in the GA community, and we look to explore new operators in future work.

GA evaluation function. An important GA component is the evaluation function. Given a particular chromosome representing one scheduling mapping, the function deterministically calculates the net business value across all workflows. The business processes in each workflow are assigned to service providers, and each provider's completion time is calculated based on the service agreement guarantee using the parameters mentioned in Section 3.1, namely the unloaded completion time α, the maximum concurrency β, and a coefficient γ that controls the linear performance degradation under heavy load. Note that the evaluation function can be easily replaced if desired; for example, other evaluation functions can model different service provider guarantees or parallel workflows.

4. EXPERIMENTS AND RESULTS

In this section we show the benefit of using our GA-based scheduler. Because we wanted to scale the scenarios up to a large number of workflows (up to 1000 in our experiments), we implemented a simulation program that allowed us to vary parameters and to measure the results with different metrics. The simulator was written in standard C++ and was run on a Linux (Fedora Core) desktop computer running at 2.8 GHz with 1 GB of RAM.

We compared our algorithm against alternative candidates:

• A well-known round-robin algorithm that assigns each business process in circular fashion to the service providers for a particular service type. This approach provides the simplest scheme for load-balancing.

• A random-proportional algorithm that proportionally assigns business processes to the service providers; that is, for a given service type, the service providers are ranked by their guaranteed completion time, and business processes are assigned proportionally to the providers based on their completion time.
(We also tried a proportionality scheme based on both the completion times and maximum concurrency but attained the same results, so only the former scheme's results are shown here.)

• A strawman greedy algorithm that always assigns business processes to the service provider that has the fastest guaranteed completion time. This algorithm represents a naive approach based on greedy, local observations of each workflow without taking into consideration all workflows.

In the experiments that follow, all results were averaged across 20 trials, and to help normalise the effects of randomisation used during the GA, each trial started by reading in pre-initialised data from disk. In Table 1 we list our experimental parameters.

Table 1: Experimental parameters
Experimental parameter                          Value
Workflows                                       5 to 1000
Business processes per workflow                 uniform random: 1 - 10
Service types                                   10
Service providers per service type              uniform random: 1 - 10
Workflow QoS goal                               uniform random: 10 - 30 seconds
Service provider completion time (α)            uniform random: 1 - 12 seconds
Service provider maximum concurrency (β)        uniform random: 1 - 12
Service provider degradation coefficient (γ)    uniform random: 0.1 - 0.9
Business value for successful workflows         uniform random: 10 - 50 points
Business value for acceptable workflows         uniform random: 0 - 10 points
Business value for failed workflows             uniform random: -10 - 0 points
GA: number of parents                           20
GA: number of children                          80
GA: number of generations                       1000

In Figure 3 we show the results of running our GA against the three candidate alternatives. The x-axis shows the number of workflows scaled up to 1000, and the y-axis shows the aggregate business value for all workflows. As can be seen, the GA consistently produces the highest business value even as the number of workflows grows; at 1000 workflows, the GA produces a 115% improvement over the next-best alternative. (Note that although we are optimising against the business value metric we defined earlier, genetic algorithms are able to converge towards the optimal value of any metric, as long as the evaluation function can consistently measure a chromosome's value with that metric.)

Figure 3: Net business value scores of different scheduling algorithms.

Figure 4: Magnification of the left-most region in Figure 3.

As expected, the greedy algorithm performs very poorly because it does the worst job at balancing load: all business processes for a given service type are assigned to only one server (the one advertised to have the fastest completion time), and as more business processes arrive, the provider's performance degrades linearly. The round-robin scheme is initially outperformed by the random-proportional scheme up to around 120 workflows (as shown in the magnified graph of Figure 4), but as the number of workflows increases, the round-robin scheme consistently wins over random-proportional. The reason is that although the random-proportional scheme assigns business processes to providers proportionally according to the advertised completion times (which is a measure of the power of the service provider), even the best providers will eventually reach a real-world maximum concurrency for the large number of workflows that we are considering. For a very large number of workflows, the round-robin scheme is able to better balance the load across all service providers.

To better understand the behaviour resulting from the scheduling assignments, we show the workflow completion results in Figures 5, 6, and 7 for 100, 500, and 900 workflows, respectively.
These figures show the percentage of workflows that are successful (can complete within their QoS limit), acceptable (can complete within κ=3 times their QoS limit), and failed (cannot complete within κ=3 times their QoS limit). The GA consistently produces the highest percentage of successful workflows (resulting in higher business values for the aggregate set of workflows). Further, the round-robin scheme produces better results than the random-proportional scheme for a large number of workflows but does not perform as well as the GA.

Figure 5: Workflow behaviour for 100 workflows.

Figure 6: Workflow behaviour for 500 workflows.

Figure 7: Workflow behaviour for 900 workflows.

In Figure 8 we graph the makespan resulting from the same experiments above. Makespan is a traditional metric from the job scheduling community measuring elapsed time for the last job to complete. While useful, it does not capture the high-level business value metric that we are optimising against. Indeed, the makespan is oblivious to the fact that we provide multiple levels of completion (successful, acceptable, and failed) and assign business value scores accordingly. For completeness, we note that the GA provides the fastest makespan, but it is matched by the round-robin algorithm. The GA produces better business values (as shown in Figure 3) because it is able to search the solution space to find better mappings that produce more successful workflows (as shown in Figures 5 to 7).

Figure 8: Maximum completion time for all workflows. This value is the makespan metric used in traditional scheduling research. Although useful, the makespan does not take into consideration the business value scoring in our problem domain.

We also looked at the effect of the scheduling algorithms on balancing the load. Figure 9 shows the percentage of service providers that were accessed while the workflows ran. As expected, the greedy algorithm always hits one service provider; on the other hand, the round-robin algorithm is the fastest to spread the business processes. Figure 10 is the percentage of accessed service providers (that is, the percentage of service providers represented in Figure 9) that had more assigned business processes than their advertised maximum concurrency. For example, in the greedy algorithm only one service provider is utilised, and this one provider quickly becomes saturated. On the other hand, the random-proportional algorithm uses many service providers, but because business processes are proportionally assigned with more assignments going to the better providers, there is a tendency for a smaller percentage of providers to become saturated.

Figure 9: The percentage of service providers utilized during workload executions. The greedy algorithm always hits the one service provider, while the round-robin algorithm spreads requests evenly across the providers.

Figure 10: The percentage of service providers that are saturated among those providers that were utilized (that is, the percentage of the service providers represented in Figure 9). A saturated service provider is one whose workload is greater than its advertised maximum concurrency.

For completeness, we show the performance of the genetic algorithm itself in Figure 11. The algorithm scales linearly with an increasing number of workflows. We note that the round-robin, random-proportional, and greedy algorithms all finished within 1 second even for the largest workflow configuration. However, we feel that the benefit of finding much higher business value scores justifies the running time of the GA; further, we would expect that the running time will improve with both software tuning and a computer faster than our off-the-shelf PC.

Figure 11: Running time of the genetic algorithm.

5. CONCLUSION

Business processes within workflows can be orchestrated to access web services. In this paper we looked at multi-tiered service provisioning where web service requests to service types can be mapped to different service providers. The resulting problem is that in order to support a very large number of workflows, the assignment of business processes to web service providers must be intelligent. We used a business value metric to measure the behaviour of workflows meeting or failing QoS values, and we optimised our scheduling to maximise the aggregate business value across all workflows. Since the solution space of scheduler mappings is exponential, we used a genetic search algorithm to search the space and converge toward the best schedule. With a default configuration for all parameters and using our business value scoring, the GA produced up to 115% business value improvement over the next-best algorithm.
Finally, because a genetic algorithm will converge towards the optimal value using any metric (even other than the business value metric we used), we believe our approach has strong potential for continuing work.

In future work, we look to acquire real-world traces of web service instances in order to get better estimates of service agreement guarantees, although we expect that such guarantees between the providers and their consumers are not generally available to the public. We will also look at other QoS metrics such as CPU and I/O usage. For example, we can analyse transfer costs with varying bandwidth, latency, data size, and data distribution. Further, we hope to improve our genetic algorithm and compare it to more scheduler alternatives. Finally, since our work is complementary to existing work in web services choreography (because we rely on pre-configured workflows), we look to integrate our approach with available web service workflow systems expressed in BPEL.

6. REFERENCES
[1] A. Ankolekar, et al. DAML-S: Semantic Markup for Web Services. In Proc. of the Int'l Semantic Web Working Symposium, 2001.
[2] L. Davis. Job Shop Scheduling with Genetic Algorithms. In Proc. of the Int'l Conference on Genetic Algorithms, 1985.
[3] H.-L. Fang, P. Ross, and D. Corne. A Promising Genetic Algorithm Approach to Job-Shop Scheduling, Rescheduling, and Open-Shop Scheduling Problems. In Proc. of the 5th Int'l Conference on Genetic Algorithms, 1993.
[4] M. Garey and D. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, 1979.
[5] J. Holland. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. MIT Press, 1992.
[6] D. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Kluwer Academic Publishers, 1989.
[7] Business Processes in a Web Services World. www-128.ibm.com/developerworks/webservices/library/ws-bpelwp/.
[8] G. Soundararajan, K. Manassiev, J. Chen, A. Goel, and C. Amza. Back-end Databases in Shared Dynamic Content Server Clusters. In Proc. of the IEEE Int'l Conference on Autonomic Computing, 2005.
[9] B. Srivastava and J. Koehler. Web Service Composition: Current Solutions and Open Problems. ICAPS, 2003.
[10] B. Urgaonkar, P. Shenoy, A. Chandra, and P. Goyal. Dynamic Provisioning of Multi-Tier Internet Applications. In Proc. of the IEEE Int'l Conference on Autonomic Computing, 2005.
[11] L. Zeng, B. Benatallah, M. Dumas, J. Kalagnanam, and Q. Sheng. Quality Driven Web Services Composition. In Proc. of the WWW Conference, 2003.
", "keywords": "qo;service request;scheduling service;qos-defined limit;multi-tiered system;end-to-end workflow composition;workflows;multi-organisation environment;business process workflow;web service;streamlined functionality;business value metric;scheduling agent;heuristic;schedule"} {"name": "train_C-67", "title": "A Holistic Approach to High-Performance Computing: Xgrid Experience", "abstract": "The Ringling School of Art and Design is a fully accredited four-year college of visual arts and design. With a student to computer ratio of better than 2-to-1, the Ringling School has achieved national recognition for its large-scale integration of technology into collegiate visual art and design education. We have found that Mac OS X is the best operating system to train future artists and designers.
Moreover, we can now buy Macs to run high-end graphics, nonlinear video editing, animation, multimedia, web production, and digital video applications rather than expensive UNIX workstations. As visual artists cross from paint on canvas to creating in the digital realm, the demand for a high-performance computing environment grows. In our public computer laboratories, students use the computers most often during the workday; at night and on weekends the computers see only light use. In order to harness the lost processing time for tasks such as video rendering, we are testing Xgrid, a suite of Mac OS X applications recently developed by Apple for parallel and distributed high-performance computing. As with any new technology deployment, IT managers need to consider a number of factors as they assess, plan, and implement Xgrid. Therefore, we would like to share with our colleagues the valuable information we learned from our implementation of an Xgrid environment. In our report, we will address issues such as assessing the needs for grid computing, potential applications, management tools, security, authentication, integration into existing infrastructure, application support, user training, and user support. Furthermore, we will discuss the issues that arose and the lessons learned during and after the implementation process.", "fulltext": "1. INTRODUCTION

Grid computing does not have a single, universally accepted definition. The technology behind the grid computing model is not new. Its roots lie in early distributed computing models that date back to the early 1980s, where scientists harnessed the computing power of idle workstations to let compute-intensive applications run on multiple workstations, dramatically shortening processing times. Although numerous distributed computing models were available for discipline-specific scientific applications, only recently have the tools become available to use general-purpose applications on a grid. Consequently, the grid computing model is gaining popularity and has become a showpiece of "utility computing". Since various computing models are used interchangeably with grid computing in the IT industry, we first sort out the similarities and differences between these computing models so that grid computing can be placed in perspective.

1.1 Clustering

A cluster is a group of machines in a fixed configuration united to operate and be managed as a single entity to increase robustness and performance. The cluster appears as a single high-speed system or a single highly available system. In this model, resources cannot enter and leave the group as necessary. There are at least two types of clusters: parallel clusters and high-availability clusters. Clustered machines are generally in spatial proximity, such as in the same server room, and dedicated solely to their task.

In a high-availability cluster, each machine provides the same service. If one machine fails, another seamlessly takes over its workload. For example, each computer could be a web server for a web site. Should one web server "die," another provides the service, so that the web site rarely, if ever, goes down.

A parallel cluster is a type of supercomputer. Problems are split into many parts, and individual cluster members are given part of the problem to solve.
An example of a parallel cluster is composed of Apple Power Mac G5 computers at Virginia Tech University [1].

1.2 Distributed Computing

Distributed computing spatially expands network services so that the components providing the services are separated. The major objective of this computing model is to consolidate processing power over a network. A simple example is spreading services such as file and print serving, web serving, and data storage across multiple machines rather than having a single machine handle all the tasks. Distributed computing can also be more fine-grained, where even a single application is broken into parts and each part located on different machines: a word processor on one server, a spell checker on a second server, etc.

1.3 Utility Computing

Literally, utility computing resembles common utilities such as telephone or electric service. A service provider makes computing resources and infrastructure management available to a customer as needed, and charges for usage rather than a flat rate. The important thing to note is that resources are only used as needed, and not dedicated to a single customer.

1.4 Grid Computing

Grid computing contains aspects of clusters, distributed computing, and utility computing. In the most basic sense, a grid turns a group of heterogeneous systems into a centrally managed but flexible computing environment that can work on tasks too time-intensive for the individual systems. The grid members are not necessarily in proximity, but must merely be accessible over a network; the grid can access computers on a LAN, WAN, or anywhere in the world via the Internet. In addition, the computers comprising the grid need not be dedicated to the grid; rather, they can function as normal workstations and then advertise their availability to the grid when not in use.

The last characteristic is the most fundamental to the grid described in this paper. A well-known example of such an ad hoc grid is the SETI@home project [2] of the University of California at Berkeley, which allows any person in the world with a computer and an Internet connection to donate unused processor time for analyzing radio telescope data.

1.5 Comparing the Grid and Cluster

A computer grid expands the capabilities of the cluster by loosening its spatial bounds, so that any computer accessible through the network gains the potential to augment the grid. A fundamental grid feature is that it scales well. The processing power of any machine added to the grid is immediately available for solving problems. In addition, the machines on the grid can be general-purpose workstations, which keeps down the cost of expanding the grid.

2. ASSESSING THE NEED FOR GRID COMPUTING

Effective use of a grid requires a computation that can be divided into independent (i.e., parallel) tasks. The results of each task cannot depend on the results of any other task, and so the members of the grid can solve the tasks in parallel. Once the tasks have been completed, the results can be assembled into the solution. Examples of parallelizable computations are the Mandelbrot set of fractals, the Monte Carlo calculations used in disciplines such as Solid State Physics, and the individual frames of a rendered animation.
This paper is concerned with the last example.

2.1 Applications Appropriate for Grid Computing

The applications used in grid computing must either be specifically designed for grid use, or scriptable in such a way that they can receive data from the grid, process the data, and then return results. In other words, the best candidates for grid computing are applications that run the same or very similar computations on a large number of pieces of data without any dependencies on the previously calculated results. Applications heavily dependent on data handling rather than processing power are generally more suited to a traditional environment than to a grid platform. Of course, the applications must also run on the computing platform that hosts the grid. Our interest is in using the Alias Maya application [3] with Apple's Xgrid [4] on Mac OS X.

Commercial applications usually have strict license requirements. This is an important concern if we install a commercial application such as Maya on all members of our grid. By its nature, the size of the grid may change as the number of idle computers changes. How many licenses will be required? Our resolution of this issue will be discussed in a later section.

2.2 Integration into the Existing Infrastructure

The grid requires a controller that recognizes when grid members are available, and parses out jobs to available members. The controller must be able to see members on the network. This does not require that members be on the same subnet as the controller, but if they are not, any intervening firewalls and routers must be configured to allow grid traffic.

3. XGRID

Xgrid is Apple's grid implementation. It was inspired by Zilla, a desktop clustering application developed by NeXT and acquired by Apple. In this report we describe the Xgrid Technology Preview 2, a free download that requires Mac OS X 10.2.8 or later and a minimum of 128 MB RAM [5].

Xgrid leverages Apple's traditional ease of use and configuration. If the grid members are on the same subnet, by default Xgrid automatically discovers available resources through Rendezvous [6]. Tasks are submitted to the grid through a GUI interface or by the command line. A System Preference Pane controls when each computer is available to the grid.

It may be best to view Xgrid as a facilitator. The Xgrid architecture handles software and data distribution, job execution, and result aggregation. However, Xgrid does not perform the actual calculations.

3.1 Xgrid Components

Xgrid has three major components: the client, the controller, and the agent. Each component is included in the default installation, and any computer can easily be configured to assume any role. In fact, for testing purposes, a computer can simultaneously assume all roles in local mode. The more typical production use is called cluster mode.

The client submits jobs to the controller through the Xgrid GUI or command line. The client defines how the job will be broken into tasks for the grid. If any files or executables must be sent as part of a job, they must reside on the client or at a location accessible to the client. When a job is complete, the client can retrieve the results from the controller. A client can only connect to a single controller at a time.

The controller runs the GridServer process. It queues tasks received from clients, distributes those tasks to the agents, and handles failover if an agent cannot complete a task.
In Xgrid Technology Preview 2, a controller can handle a maximum of 10,000 agent connections. Only one controller can exist per logical grid.

The agents run the GridAgent process. When the GridAgent process starts, it registers with a controller; an agent can only be connected to one controller at a time. Agents receive tasks from their controller, perform the specified computations, and then send the results back to the controller. An agent can be configured to always accept tasks, or to accept them only when the computer is not otherwise busy.

3.2 Security and Authentication

By default, Xgrid requires two passwords. First, a client needs a password to access a controller. Second, the controller needs a password to access an agent. Either password requirement can be disabled. Xgrid uses a two-way random mutual authentication protocol with MD5 hashes. At this time, data encryption is only used for passwords.

As mentioned earlier, an agent registers with a controller when the GridAgent process starts. There is no native method for the controller to reject agents, and so it must accept any agent that registers. This means that any agent could submit a job that consumes excessive processor and disk space on the agents. Of course, since Mac OS X is a BSD-based operating system, the controller could employ Unix methods of restricting network connections from agents.

The Xgrid daemons run as the user nobody, which means the daemons can read, write, or execute any file according to world permissions. Thus, Xgrid jobs can execute many commands and write to /tmp and /Volumes. In general, this is not a major security risk, but it does require a level of trust between all members of the grid.

3.3 Using Xgrid

3.3.1 Installation

Basic Xgrid installation and configuration is described both in Apple documentation [5] and online at the University of Utah web site [8]. The installation is straightforward and offers no options for customization. This means that every computer on which Xgrid is installed has the potential to be a client, controller, or agent.

3.3.2 Agent and Controller Configuration

The agents and controllers can be configured through the Xgrid Preference Pane in the System Preferences or through XML files in /Library/Preferences. Here the GridServer and GridAgent processes are started, passwords are set, and the controller discovery method used by agents is selected. By default, agents use Rendezvous to find a controller, although the agents can also be configured to look for a specific host.

The Xgrid Preference Pane also sets whether the agents will always accept jobs, or only accept jobs when idle. In Xgrid terms, idle either means that the Xgrid screen saver has activated, or that the mouse and keyboard have not been used for more than 15 minutes. Even if the agent is configured to always accept tasks, these tasks will run in the background at a low priority if the computer is being used.

However, if an agent only accepts jobs when idle, any unfinished task being performed when the computer ceases being idle is immediately stopped and any intermediary results are lost. Then the controller assigns the task to another available member of the grid.

Advertising the controller via Rendezvous can be disabled by editing /Library/Preferences/com.apple.xgrid.controller.plist.
This, however, will not prevent an agent from connecting to the controller by hostname.

3.3.3 Sending Jobs from an Xgrid Client

The client sends jobs to the controller either through the Xgrid GUI or the command line. The Xgrid GUI submits jobs via small applications called plug-ins. Sample plug-ins are provided by Apple, but they are only useful for simple testing or as examples of how to create a custom plug-in. If we are to employ Xgrid for useful work, we will require a custom plug-in.

James Reynolds details the creation of custom plug-ins on the University of Utah Mac OS web site [8]. Xgrid stores plug-ins in /Library/Xgrid/Plug-ins or ~/Library/Xgrid/Plug-ins, depending on whether the plug-in was installed with Xgrid or created by a user.

The core plug-in parameter is the command, which includes the executable the agents will run. Another important parameter is the working directory. This directory contains necessary files that are not installed on the agents or available to them over a network. The working directory will always be copied to each agent, so it is best to keep this directory small. If the files are installed on the agents or available over a network, the working directory parameter is not needed.

The command line allows the options available with the GUI plug-in, but it can be slightly more cumbersome. However, the command line probably will be the method of choice for serious work. The command arguments must be included in a script unless they are very basic. This can be a shell, Perl, or Python script, as long as the agent can interpret it.

3.3.4 Running the Xgrid Job

When the Xgrid job is started, the command tells the controller how to break the job into tasks for the agents. Then the command is tarred and gzipped and sent to each agent; if there is a working directory, this is also tarred and gzipped and sent to the agents. The agents extract these files into /tmp and run the task. Recall that since the GridAgent process runs as the user nobody, everything associated with the command must be available to nobody.

Executables called by the command should be installed on the agents unless they are very simple. If the executable depends on libraries or other files, it may not function properly if transferred, even if the dependent files are referenced in the working directory.

When the task is complete, the results are available to the client. In principle, the results are sent to the client, but whether this actually happens depends on the command. If the results are not sent to the client, they will be in /tmp on each agent. When available, a better solution is to direct the results to a network volume accessible to the client.

3.4 Limitations and Idiosyncrasies

Since Xgrid is only in its second preview release, there are some rough edges and limitations. Apple acknowledges some limitations [7]. For example, the controller cannot determine whether an agent is trustworthy, and the controller always copies the command and working directory to the agent without checking to see if these exist on the agent.

Other limitations are likely just a by-product of unfinished work. Neither the client nor the controller can specify which agents will receive the tasks, which is particularly important if the agents contain a variety of processor types and speeds and the user wants to optimize the calculations. At this time, the best solution to this problem may be to divide the computers into multiple logical grids.
There is also no standard way to monitor the progress of a running job on each agent. The Xgrid GUI and command line indicate which agents are working on tasks, but give no indication of progress.
Finally, at this time only Mac OS X clients can submit jobs to the grid. The framework exists to allow third parties to write plug-ins for other Unix flavors, but Apple has not created them.
4. XGRID IMPLEMENTATION
Our goal is an Xgrid render farm for Alias Maya. The Ringling School has about 400 Apple Power Mac G4s and G5s in 13 computer labs. The computers range from 733 MHz single-processor G4s and 500 MHz and 1 GHz dual-processor G4s to 1.8 GHz dual-processor G5s. All of these computers are lightly used in the evening and on weekends and represent an enormous processing resource for our student rendering projects.
4.1 Software Installation
During our Xgrid testing, we loaded software on each computer multiple times, including the operating systems. We saved time by facilitating our installations with the remote administration daemon (radmind) software developed at the University of Michigan [9], [10].
Everything we installed for testing was first created as a radmind base load or overload. Thus, Mac OS X, Mac OS X Developer Tools, Xgrid, POV-Ray [11], and Alias Maya were stored on a radmind server and then installed on our test computers when needed.
4.2 Initial Testing
We used six 1.8 GHz dual-processor Apple Power Mac G5s for our Xgrid tests. Each computer ran Mac OS X 10.3.3 and contained 1 GB RAM. As shown in Figure 1, one computer served as both client and controller, while the other five acted as agents.
Before attempting Maya rendering with Xgrid, we performed basic calculations to cement our understanding of Xgrid. Apple's Xgrid documentation is sparse, so finding helpful web sites facilitated our learning.
We first ran the Mandelbrot set plug-in provided by Apple, which allowed us to test the basic functionality of our grid. Then we performed benchmark rendering with the open source application POV-Ray, as described by Daniel Côté [12] and James Reynolds [8]. Our results showed that one dual-processor G5 rendering the benchmark POV-Ray image took 104 minutes. Breaking the image into three equal parts and using Xgrid to send the parts to three agents required 47 minutes. However, two agents finished their rendering in 30 minutes, while the third agent used 47 minutes; the entire render was only as fast as the slowest agent.
These results gave us two important pieces of information. First, the much longer rendering time for one of the tasks indicated that we should be careful how we split jobs into tasks for the agents. Not all portions of the rendering will take equal amounts of time, even if the pixel size is the same. Second, since POV-Ray cannot take advantage of both processors in a G5, neither can an Xgrid task running POV-Ray. Alias Maya does not have this limitation.
4.3 Rendering with Alias Maya 6
We first installed Alias Maya 6 for Mac OS X on the client/controller and each agent. Maya 6 requires licenses for use as a workstation application. However, if it is just used for rendering from the command line or a script, no license is needed. We thus created a minimal installation of Maya as a radmind overload. The application was installed in a hidden directory inside /Applications.
This was done so that normal users of the workstations would not find and attempt to run Maya, which would fail because these installations are not licensed for such use.
In addition, Maya requires the existence of a directory ending in the path /maya. The directory must be readable and writable by the Maya user. For a user running Maya on a Mac OS X workstation, the path would usually be ~/Documents/maya. Unless otherwise specified, this directory will be the default location for Maya data and output files. If the directory does not exist, Maya will try to create it, even if the user specifies that the data and output files exist in other locations.
Figure 1. Xgrid test grid. (The grid comprises a client/controller, Agents 1 through 5, and a network volume holding the jobs and data.)
However, Xgrid runs as the user nobody, which does not have a home directory. Maya is unable to create the needed directory, and looks instead for /Alias/maya. This directory also does not exist, and the user nobody has insufficient rights to create it. Our solution was to manually create /Alias/maya and give the user nobody read and write permissions.
We also created a network volume for storage of both the rendering data and the resulting rendered frames. This avoided sending the Maya files and associated textures to each agent as part of a working directory. Such a solution worked well for us because our computers are geographically close on a LAN; if greater distance had separated the agents from the client/controller, specifying a working directory may have been a better solution.
Finally, we created a custom GUI plug-in for Xgrid. The plug-in command calls a Perl script with three arguments. Two arguments specify the beginning and end frames of the render, and the third argument gives the number of frames in each job (which we call the cluster size). The script then calculates the total number of jobs and parcels them out to the agents. For example, if we begin at frame 201 and end at frame 225, with 5 frames for each job, the plug-in will create 5 jobs and send them out to the agents.
Once the jobs are sent to the agents, the script executes the /usr/sbin/Render command on each agent with the parameters appropriate for the particular job. The results are sent to the network volume.
With the setup described, we were able to render with Alias Maya 6 on our test grid. Rendering speed was not important at this time; our first goal was to implement the grid, and in that we succeeded.
4.3.1 Pseudo Code for Perl Script in Custom Xgrid Plug-in
In this section we summarize, in simplified pseudo code, the Perl script used in our Xgrid plug-in.
agent_jobs {
  read the beginning frame, end frame, and cluster size of the render
  check whether the render divides into an integer number of jobs of the given cluster size
  if not, reduce the cluster size of the last job and set its last frame to the end frame of the render
  determine the start frame and end frame for each job
  execute the Render command for each job
}
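To make the partitioning arithmetic concrete, the following is a minimal sketch in Python (our actual plug-in uses Perl). The function and variable names are ours, and the Render invocation is shown only as a comment, since its exact flags depend on the Maya installation.
def split_render_jobs(begin_frame, end_frame, cluster_size):
    # Partition [begin_frame, end_frame] into jobs of cluster_size
    # frames each; the last job shrinks so that its final frame is
    # exactly the end frame of the render.
    jobs = []
    start = begin_frame
    while start <= end_frame:
        end = min(start + cluster_size - 1, end_frame)
        jobs.append((start, end))
        start = end + 1
    return jobs

# Example from the text: frames 201-225 with a cluster size of 5
# yields 5 jobs: (201, 205), (206, 210), ..., (221, 225).
for start, end in split_render_jobs(201, 225, 5):
    print(start, end)
    # Each job would then invoke Maya's command-line renderer on an
    # agent (e.g. /usr/sbin/Render with the frame range for the job),
    # writing its frames to the network volume.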
4.4 Lessons Learned
Rendering with Maya from the Xgrid GUI was not trivial. The lack of Xgrid documentation and the requirements of Maya combined into a confusing picture, where it was difficult to decide the true cause of the problems we encountered. Trial and error was required to determine the best way to set up our grid.
The first hurdle was creating the directory /Alias/maya with read and write permissions for the user nobody. The second hurdle was learning that we got the best performance by storing the rendering data on a network volume.
The last major hurdle was retrieving our results from the agents. Unlike the POV-Ray rendering tests, our initial Maya results were never returned to the client; instead, Maya stored the results in /tmp on each agent. Specifying in the plug-in where to send the results would not change this behavior. We decided this was likely a Maya issue rather than an Xgrid issue, and the solution was to send the results to the network volume via the Perl script.
5. FUTURE PLANS
Maya on Xgrid is not yet ready to be used by the students of the Ringling School. In order to do this, we must address at least the following concerns.
• Continue our rendering tests through the command line rather than the GUI plug-in. This will be essential for the following step.
• Develop an appropriate interface for users to send jobs to the Xgrid controller. This will probably be an extension to the web interface of our existing render farm, where the student specifies parameters that are placed in a script that issues the Render command.
• Perform timed Maya rendering tests with Xgrid. Part of this should compare the rendering times for Power Mac G4s and G5s.
6. CONCLUSION
Grid computing continues to advance. Recently, the IT industry has witnessed the emergence of numerous types of contemporary grid applications in addition to the traditional grid framework for compute-intensive applications. For instance, peer-to-peer applications such as Kazaa are based on storage grids that do not share processing power but instead use an elegant protocol to swap files between systems. Although on our campuses we discourage students from using peer-to-peer applications for music sharing, the same protocol can be applied to applications such as decision support and data mining. The National Virtual Collaboratory grid project [13] will link earthquake researchers across the U.S. with computing resources, allowing them to share extremely large data sets, research equipment, and work together as virtual teams over the Internet.
There is an assortment of new grid players in the IT world expanding the grid computing model and advancing the grid technology to the next level. SAP [14] is piloting a project to grid-enable SAP ERP applications, Dell [15] has partnered with Platform Computing to consolidate computing resources and provide grid-enabled systems for compute-intensive applications, Oracle has integrated support for grid computing in their 10g release [16], United Devices [17] offers a hosting service for grid-on-demand, and Sun Microsystems continues its research and development of Sun's N1 Grid Engine [18], which combines grid and clustering platforms.
Simply put, grid computing is up and coming. The potential benefits of grid computing in higher education are colossal, while the implementation costs are low. Today, it would be difficult to identify an application with as high a return on investment as grid computing in the information technology divisions of higher education institutions. It is a mistake to overlook a technology with such a high payback.
7. ACKNOWLEDGMENTS
The authors would like to thank Scott Hanselman of the IT team at the Ringling School of Art and Design for providing valuable input in the planning of our Xgrid testing. We would also like to thank the posters of the Xgrid Users Mailing List [19] for providing insight into many areas of Xgrid.
8. REFERENCES
[1] Apple Academic Research, http://www.apple.com/education/science/profiles/vatech/.
[2] SETI@home: Search for Extraterrestrial Intelligence at home. http://setiathome.ssl.berkeley.edu/.
[3] Alias, http://www.alias.com/.
[4] Apple Computer, Xgrid, http://www.apple.com/acg/xgrid/.
[5] Xgrid Guide, http://www.apple.com/acg/xgrid/, 2004.
[6] Apple Mac OS X Features, http://www.apple.com/macosx/features/rendezvous/.
[7] Xgrid Manual Page, 2004.
[8] James Reynolds, Xgrid Presentation, University of Utah, http://www.macos.utah.edu:16080/xgrid/, 2004.
[9] Research Systems Unix Group, Radmind, University of Michigan, http://rsug.itd.umich.edu/software/radmind.
[10] Using the Radmind Command Line Tools to Maintain Multiple Mac OS X Machines, http://rsug.itd.umich.edu/software/radmind/files/radmindtutorial-0.8.1.pdf.
[11] POV-Ray, http://www.povray.org/.
[12] Daniel Côté, Xgrid example: Parallel graphics rendering in POVray, http://unu.novajo.ca/simple/, 2004.
[13] NEESgrid, http://www.neesgrid.org/.
[14] SAP, http://www.sap.com/.
[15] Platform Computing, http://platform.com/.
[16] Grid, http://www.oracle.com/technologies/grid/.
[17] United Devices, Inc., http://ud.com/.
[18] N1 Grid Engine 6, http://www.sun.com/software/gridware/index.html/.
[19] Xgrid Users Mailing List, http://www.lists.apple.com/mailman/listinfo/xgridusers/.
", "keywords": "render;multimedia;xgrid environment;xgrid;nonlinear video editing;macintosh os x;grid computing;large-scale integration of technology;web production;animation;digital video application;rendezvous;design;visual art;high-performance computing;design education;high-end graphic;mac os x;operating system;cluster"} {"name": "train_C-68", "title": "An Evaluation of Availability Latency in Carrier-based Vehicular ad-hoc Networks", "abstract": "On-demand delivery of audio and video clips in peer-to-peer vehicular ad-hoc networks is an emerging area of research. Our target environment uses data carriers, termed zebroids, where a mobile device carries a data item on behalf of a server to a client thereby minimizing its availability latency. In this study, we quantify the variation in availability latency with zebroids as a function of a rich set of parameters such as car density, storage per device, repository size, and replacement policies employed by zebroids. Using analysis and extensive simulations, we gain novel insights into the design of carrier-based systems. Significant improvements in latency can be obtained with zebroids at the cost of a minimal overhead. These improvements occur even in scenarios with lower accuracy in the predictions of the car routes. Two particularly surprising findings are: (1) a naive random replacement policy employed by the zebroids shows competitive performance, and (2) latency improvements obtained with a simplified instantiation of zebroids are found to be robust to changes in the popularity distribution of the data items.", "fulltext": "1. INTRODUCTION
Technological advances in areas of storage and wireless communications have now made it feasible to envision on-demand delivery of data items, e.g., video and audio clips, in vehicular peer-to-peer networks.
In prior work, Ghandeharizadeh et al. [10] introduce the concept of vehicles equipped with a Car-to-Car Peer-to-Peer device, termed AutoMata, for in-vehicle entertainment. The notable features of an AutoMata include a mass storage device offering hundreds of gigabytes (GB) of storage, a fast processor, and several types of networking cards. Even with today's 500 GB disk drives, a repository of diverse entertainment content may exceed the storage capacity of a single AutoMata. Such repositories constitute the focus of this study. To exchange data, we assume each AutoMata is configured with two types of networking cards: 1) a low-bandwidth networking card with a long radio range on the order of miles, which enables an AutoMata device to communicate with a nearby cellular or WiMax station; 2) a high-bandwidth networking card with a limited radio range on the order of hundreds of feet.
The high-bandwidth connection supports data rates on the order of tens to hundreds of Megabits per second and represents the ad-hoc peer-to-peer network between the vehicles. This is labelled as the data plane and is employed to exchange data items between devices. The low-bandwidth connection serves as the control plane, enabling AutoMata devices to exchange meta-data with one or more centralized servers. This connection offers bandwidths on the order of tens to hundreds of Kilobits per second. The centralized servers, termed dispatchers, compute schedules of data delivery along the data plane using the provided meta-data. These schedules are transmitted to the participating vehicles using the control plane. The technical feasibility of such a two-tier architecture is presented in [7], with preliminary results to demonstrate that the bandwidth of the control plane is sufficient for the exchange of control information needed for realizing such an application.
In a typical scenario, an AutoMata device presents a passenger with a list of data items (without loss of generality, the term data item might denote either traditional media such as text or continuous media such as an audio or video clip), showing both the name of each data item and its availability latency. The latter, denoted as δ, is defined as the earliest time at which the client encounters a copy of its requested data item. A data item is available immediately when it resides in the local storage of the AutoMata device serving the request. Due to storage constraints, an AutoMata may not store the entire repository. In this case, availability latency is the time from when the user issues a request until the AutoMata encounters another car containing the referenced data item. (The terms car and AutoMata are used interchangeably in this study.)
The availability latency for an item is a function of the current location of the client, its destination and travel path, the mobility model of the AutoMata-equipped vehicles, the number of replicas constructed for the different data items, and the placement of data item replicas across the vehicles. A method to improve the availability latency is to employ data carriers, which transport a replica of the requested data item from a server car containing it to a client that requested it.
These data carriers are termed 'zebroids'.
Selection of zebroids is facilitated by the two-tiered architecture. The control plane enables centralized information gathering at a dispatcher present at a base-station (there may be dispatchers deployed at a subset of the base-stations for fault-tolerance and robustness; dispatchers between base-stations may communicate via the wired infrastructure). Some examples of control information are the currently active requests, the travel paths of the clients and their destinations, and the paths of the other cars. For each client request, the dispatcher may choose a set of z carriers that collaborate to transfer a data item from a server to a client (z-relay zebroids). Here, z is the number of zebroids such that 0 ≤ z < N, where N is the total number of cars. When z = 0 there are no carriers, requiring a server to deliver the data item directly to the client. Otherwise, the chosen relay team of z zebroids hands the data item over transitively to one another to arrive at the client, thereby reducing availability latency (see Section 3.1 for details). To increase robustness, the dispatcher may employ multiple relay teams of z carriers for every request. This may be useful in scenarios where the dispatcher has lower prediction accuracy in the information about the routes of the cars. Finally, storage constraints may require a zebroid to evict existing data items from its local storage to accommodate the client requested item.
In this study, we quantify the following main factors that affect availability latency in the presence of zebroids: (i) data item repository size, (ii) car density, (iii) storage capacity per car, (iv) client trip duration, (v) replacement scheme employed by the zebroids, and (vi) accuracy of the car route predictions. For a significant subset of these factors, we address some key questions pertaining to the use of zebroids both via analysis and extensive simulations.
Our main findings are as follows. A naive random replacement policy employed by the zebroids shows competitive performance in terms of availability latency. With such a policy, substantial improvements in latency can be obtained with zebroids at a minimal replacement overhead. In more practical scenarios, where the dispatcher has inaccurate information about the car routes, zebroids continue to provide latency improvements. A surprising result is that changes in the popularity of the data items do not impact the latency gains obtained with a simple instantiation of z-relay zebroids called one-instantaneous zebroids (see Section 3.1). This study suggests a number of interesting directions to be pursued to gain a better understanding of the design of carrier-based systems that improve availability latency.
Related Work: Replication in mobile ad-hoc networks has been a widely studied topic [11, 12, 15]. However, none of these studies employ zebroids as data carriers to reduce the latency of the clients' requests. Several novel and important studies such as ZebraNet [13], DakNet [14], Data Mules [16], Message Ferries [20], and Seek and Focus [17] have analyzed factors impacting intermittently connected networks consisting of data carriers similar in spirit to zebroids. Factors considered by each study are dictated by their assumed environment and target application.
A novel characteristic of our study is the impact on availability latency for a given database repository of items. A detailed description of related work can be obtained in [9].
The rest of this paper is organized as follows. Section 2 provides an overview of the terminology along with the factors that impact availability latency in the presence of zebroids. Section 3 describes how the zebroids may be employed. Section 4 provides details of the analysis methodology employed to capture the performance with zebroids. Section 5 describes the details of the simulation environment used for evaluation. Section 6 lists the key questions examined in this study and answers them via analysis and simulations. Finally, Section 7 presents brief conclusions and future research directions.
2. OVERVIEW AND TERMINOLOGY
Table 1 summarizes the notation of the parameters used in the paper. Below we introduce some terminology used in the paper. Assume a network of N AutoMata-equipped cars, each with a storage capacity of α bytes. The total storage capacity of the system is S_T = N·α. There are T data items in the database, each with size S_i. The frequency of access to data item i is denoted as f_i, with \sum_{j=1}^{T} f_j = 1. Let the trip duration of the client AutoMata under consideration be γ.
Table 1: Terms and their definitions
Database parameters:
  T: number of data items.
  S_i: size of data item i.
  f_i: frequency of access to data item i.
Replication parameters:
  R_i: normalized frequency of access to data item i.
  r_i: number of replicas for data item i.
  n: characterizes a particular replication scheme.
  δ_i: average availability latency of data item i.
  δ_agg: aggregate availability latency, \delta_{agg} = \sum_{j=1}^{T} \delta_j \cdot f_j.
AutoMata system parameters:
  G: number of cells in the map (2D torus).
  N: number of AutoMata devices in the system.
  α: storage capacity per AutoMata.
  γ: trip duration of the client AutoMata.
  S_T: total storage capacity of the AutoMata system, S_T = N·α.
We now define the normalized frequency of access to data item i, denoted by R_i, as:
R_i = \frac{(f_i)^n}{\sum_{j=1}^{T} (f_j)^n}, \quad 0 \le n \le \infty \quad (1)
The exponent n characterizes a particular replication technique. A square-root replication scheme is realized when n = 0.5 [5]. This serves as the baseline for comparison with the case when zebroids are deployed. R_i is normalized to a value between 0 and 1. The number of replicas for data item i, denoted as r_i, is r_i = \min(N, \max(1, \frac{R_i \cdot N \cdot \alpha}{S_i})). This captures the case when at least one copy of every data item must be present in the ad-hoc network at all times. In cases where a data item may be lost from the ad-hoc network, this equation becomes r_i = \min(N, \max(0, \frac{R_i \cdot N \cdot \alpha}{S_i})). In this case, a request for the lost data item may need to be satisfied by fetching the item from a remote server.
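As an illustration of Equation 1, the following minimal Python sketch computes R_i and r_i for unit-size data items (S_i = 1). Rounding to an integer replica count is our own choice, since the expression above does not specify how fractional values are handled.
def replica_counts(freqs, N, alpha, n=0.5):
    # Normalized access frequencies: R_i = f_i^n / sum_j f_j^n
    # (n = 0.5 gives square-root replication).
    weights = [f ** n for f in freqs]
    total = sum(weights)
    # r_i = min(N, max(1, R_i * N * alpha / S_i)) with S_i = 1;
    # we round to obtain an integer number of replicas (an assumption).
    return [min(N, max(1, round((w / total) * N * alpha))) for w in weights]

# Example: four items with skewed popularity, 20 cars, 2 slots each.
print(replica_counts([0.5, 0.25, 0.15, 0.1], N=20, alpha=2))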
The availability latency for a data item i, denoted as δ_i, is defined as the earliest time at which a client AutoMata will find the first replica of the item accessible to it. If this condition is not satisfied, then we set δ_i to γ. This indicates that data item i was not available to the client during its journey. Note that since there is at least one replica in the system for every data item i, by setting γ to a large value we ensure that the client's request for any data item i will be satisfied. However, in most practical circumstances γ may not be so large as to find every data item.
We are interested in the availability latency observed across all data items. Hence, we weight the average availability latency of every data item i by its f_i to obtain the following weighted availability latency (δ_agg) metric: \delta_{agg} = \sum_{i=1}^{T} \delta_i \cdot f_i.
Next, we present our solution approach, describing how zebroids are selected.
3. SOLUTION APPROACH
3.1 Zebroids
When a client references a data item missing from its local storage, the dispatcher identifies all cars with a copy of the data item as servers. Next, the dispatcher obtains the future routes of all cars for a finite time duration equivalent to the maximum time the client is willing to wait for its request to be serviced. Using this information, the dispatcher schedules the quickest delivery path from any of the servers to the client, using any other cars as intermediate carriers. Hence, it determines the optimal set of forwarding decisions that will enable the data item to be delivered to the client in the minimum amount of time. Note that the latency along the quickest delivery path that employs a relay team of z zebroids is similar to that obtained with epidemic routing [19] under the assumptions of infinite storage and no interference.
A simple instantiation of z-relay zebroids occurs when z = 1 and the client's request triggers a transfer of a copy of the requested data item from a server to a zebroid in its vicinity. Such a zebroid is termed a one-instantaneous zebroid (a simplified sketch of this selection appears at the end of this subsection). In some cases, the dispatcher might have inaccurate information about the routes of the cars. Hence, a zebroid scheduled on the basis of this inaccurate information may not rendezvous with its target client. To minimize the likelihood of such scenarios, the dispatcher may schedule multiple zebroids. This may incur additional overhead due to redundant resource utilization to obtain the same latency improvements.
The time required to transfer a data item from a server to a zebroid depends on its size and the available link bandwidth. With small data items, it is reasonable to assume that this transfer time is small, especially in the presence of the high-bandwidth data plane. Large data items may be divided into smaller chunks, enabling the dispatcher to schedule one or more zebroids to deliver each chunk to a client in a timely manner. This remains a future research direction.
Initially, the number of replicas for each data item might be computed using Equation 1. This scheme computes the number of data item replicas as a function of their popularity. It is static because the number of replicas in the system does not change and no replacements are performed. Hence, this is referred to as the 'no-zebroids' environment. We quantify the performance of the various replacement policies with reference to this baseline that does not employ zebroids.
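The sketch below illustrates, under simplifying assumptions, how a dispatcher might select a one-instantaneous zebroid: among the cars currently co-located with a server, pick the one predicted to rendezvous with the client earliest, and only if it beats every server. The helper meet_time, which would return the predicted first rendezvous time derived from the reported routes, is hypothetical.
def pick_zebroid(servers, nearby_cars, client, meet_time):
    # Earliest time at which any server itself meets the client.
    earliest = min(meet_time(s, client) for s in servers)
    chosen = None
    for car in nearby_cars:   # cars currently co-located with some server
        t = meet_time(car, client)
        if t < earliest:      # must beat every server (and prior candidates)
            earliest, chosen = t, car
    return chosen             # None means a server delivers the item directly

# Toy example with fixed predicted meeting times (in time slots):
times = {"server_a": 7, "server_b": 9, "car_x": 4, "car_y": 8}
best = pick_zebroid(["server_a", "server_b"], ["car_x", "car_y"], "client",
                    lambda car, client: times[car])
print(best)  # car_x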
One may assume a cold start phase, where initially only one or a few copies of every data item exist in the system. Many storage slots of the cars may be unoccupied. When the cars encounter one another, they construct new replicas of some selected data items to occupy the empty slots. The selection procedure may be to choose the data items uniformly at random. New replicas are created as long as a car has a certain threshold of its storage unoccupied. Eventually, the majority of the storage capacity of a car will be exhausted.
3.2 Carrier-based Replacement Policies
The replacement policies considered in this paper are reactive, since a replacement occurs only in response to a request issued for a certain data item. When the local storage of a zebroid is completely occupied, it needs to replace one of its existing items to carry the client requested data item. For this purpose, the zebroid must select an appropriate candidate for eviction. This decision process is analogous to that encountered in operating system paging, where the goal is to maximize the cache hit ratio to prevent disk access delay [18]. The carrier-based replacement policies employed in our study are Least Recently Used (LRU), Least Frequently Used (LFU), and Random (where an eviction candidate is chosen uniformly at random). We have considered local and global variants of the LRU/LFU policies, which determine whether local or global knowledge of the contents of the cars known at the dispatcher is used for the eviction decision at a zebroid (see [9] for more details).
The replacement policies incur the following overheads. First, the complexity associated with the implementation of a policy. Second, the bandwidth used to transfer a copy of a data item from a server to the zebroid. Third, the average number of replacements incurred by the zebroids. Note that in the no-zebroids case neither overhead is incurred.
The metrics considered in this study are the aggregate availability latency δ_agg, the percentage improvement in δ_agg with zebroids as compared to the no-zebroids case, and the average number of replacements incurred per client request, which is an indicator of the overhead incurred by zebroids.
Note that the dispatchers, with the help of the control plane, may ensure that no data item is lost from the system. In other words, at least one replica of every data item is maintained in the ad-hoc network at all times. In such cases, even though a car may meet a requesting client earlier than other servers, if its local storage contains data items with only a single copy in the system, then such a car is not chosen as a zebroid.
4. ANALYSIS METHODOLOGY
Here, we present the analytical evaluation methodology and some approximations, as closed-form equations, that capture the improvements in availability latency that can be obtained with both one-instantaneous and z-relay zebroids. First, we present some preliminaries of our analysis methodology.
• Let N be the number of cars in the network performing a 2D random walk on a \sqrt{G} \times \sqrt{G} torus. An additional car serves as a client, yielding a total of N + 1 cars. Such a mobility model has been used widely in the literature [17, 16], chiefly because it is amenable to analysis and provides a baseline against which the performance of other mobility models can be compared. Moreover, this class of Markovian mobility models has been used to model the movements of vehicles [3, 21].
• We assume that all cars start from the stationary distribution and perform independent random walks. Although for sparse density scenarios the independence assumption does hold, it is no longer valid when N approaches G.
• Let the size of the data item repository of interest be T. Also, data item i has r_i replicas.
This implies that r_i cars, identified as servers, have a copy of this data item when the client requests item i.
All analysis results presented in this section are obtained assuming that the client is willing to wait as long as it takes for its request to be satisfied (unbounded trip duration, γ = ∞). With the random walk mobility model on a 2D torus, there is a guarantee that as long as there is at least one replica of the requested data item in the network, the client will eventually encounter this replica [2]. Extensions to the analysis that also consider finite trip durations can be obtained in [9].
Consider a scenario where no zebroids are employed. In this case, the expected availability latency for the data item is the expected meeting time of the random walk undertaken by the client with any of the random walks performed by the servers. Aldous et al. [2] show that the meeting time of two random walks in such a setting can be modelled as an exponential distribution with mean C = c·G·log G, where the constant c ≈ 0.17 for G ≥ 25. The meeting time, or equivalently the availability latency δ_i, for the client requesting data item i is the time until it encounters any of these r_i replicas for the first time. This is also an exponential distribution, with the following expected value (note that this formulation is valid only for sparse cases, when G >> r_i):
\delta_i = \frac{c \cdot G \cdot \log G}{r_i}
The aggregate availability latency without employing zebroids is then this expression averaged over all data items, weighted by their frequency of access:
\delta_{agg}(\text{no-zeb}) = \sum_{i=1}^{T} \frac{f_i \cdot c \cdot G \cdot \log G}{r_i} = \sum_{i=1}^{T} \frac{f_i \cdot C}{r_i} \quad (2)
4.1 One-instantaneous Zebroids
Recall that with one-instantaneous zebroids, for a given request, a new replica is created on a car in the vicinity of the server, provided this car meets the client earlier than any of the r_i servers. Moreover, this replica is spawned at the time step when the client issues the request. Let N_i^c be the expected total number of nodes that are in the same cell as any of the r_i servers. Then, we have
N_i^c = (N - r_i) \cdot \left(1 - \left(1 - \frac{1}{G}\right)^{r_i}\right) \quad (3)
In the analytical model, we assume that N_i^c new replicas are created, so that the total number of replicas is increased to r_i + N_i^c. The availability latency is reduced, since the client is more likely to meet a replica earlier. The aggregate expected availability latency in the case of one-instantaneous zebroids is then given by
\delta_{agg}(\text{zeb}) = \sum_{i=1}^{T} \frac{f_i \cdot c \cdot G \cdot \log G}{r_i + N_i^c} = \sum_{i=1}^{T} \frac{f_i \cdot C}{r_i + N_i^c} \quad (4)
Note that in obtaining this expression, for ease of analysis, we have assumed that the new replicas start from random locations in the torus (not necessarily from the same cell as the original r_i servers). It thus treats all the N_i^c carriers independently, just like the r_i original servers. As we shall show below by comparison with simulations, this approximation provides an upper bound on the improvements that can be obtained, because it results in a lower expected latency at the client.
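The following short Python sketch evaluates Equations 2-4 numerically; it takes the natural logarithm for log G, which is our assumption since the base is not stated above.
import math

def analytical_latencies(freqs, replicas, N, G, c=0.17):
    # C = c * G * log G, the exponential meeting-time mean from [2].
    C = c * G * math.log(G)
    no_zeb = sum(f * C / r for f, r in zip(freqs, replicas))   # Eq. 2
    zeb = 0.0
    for f, r in zip(freqs, replicas):
        Nc = (N - r) * (1 - (1 - 1.0 / G) ** r)                # Eq. 3
        zeb += f * C / (r + Nc)                                # Eq. 4
    return no_zeb, zeb

# Example: T = 10 uniformly requested items, one replica each,
# N = 50 cars on a 10 x 10 torus (G = 100).
print(analytical_latencies([0.1] * 10, [1] * 10, N=50, G=100))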
It should be noted that the procedure listed above will yield a latency similar to that of a dispatcher employing one-instantaneous zebroids (see Section 3.1). Since the dispatcher is aware of all future car movements, it would only transfer the requested data item on a single zebroid if it determines that the zebroid will meet the client earlier than any other server. This selected zebroid is included in the N_i^c new replicas.
4.2 z-relay Zebroids
To calculate the expected availability latency with z-relay zebroids, we use a coloring-problem analog similar to an approach used by Spyropoulos et al. [17]. Details of the procedure to obtain a closed-form expression are given in [9]. The aggregate availability latency (δ_agg) with z-relay zebroids is given by
\delta_{agg}(\text{zeb}) = \sum_{i=1}^{T} f_i \cdot \frac{C}{N+1} \cdot \frac{1}{N+1-r_i} \cdot \left(N \cdot \log\frac{N}{r_i} - \log(N+1-r_i)\right) \quad (5)
5. SIMULATION METHODOLOGY
The simulation environment considered in this study comprises vehicles such as cars that carry a fraction of the data item repository. A prediction accuracy parameter inherently provides a certain probabilistic guarantee on the confidence of the car route predictions known at the dispatcher. A value of 100% implies that the exact routes of all cars are known at all times. A 70% value for this parameter indicates that the routes predicted for the cars will match the actual ones with probability 0.7. Note that this probability is spread across the car routes for the entire trip duration. We now provide the preliminaries of the simulation study and then describe the parameter settings used in our experiments.
• Similar to the analysis methodology, the map used is a 2D torus. A Markov mobility model representing an unbiased 2D random walk on the surface of the torus describes the movement of the cars across this torus.
• Each grid cell is a unique state of this Markov chain. In each time slot, every car makes a transition from a cell to any of its 8 neighboring cells. The transition is a function of the current location of the car and a probability transition matrix Q = [q_ij], where q_ij is the probability of transition from state i to state j. Only AutoMata-equipped cars within the same cell may communicate with each other.
• The parameters γ and δ have been discretized and are expressed in terms of the number of time slots.
• An AutoMata device does not maintain more than one replica of a data item. This is because additional replicas occupy storage without providing benefits.
• Either one-instantaneous or z-relay zebroids may be employed per client request for latency improvement.
• Unless otherwise mentioned, the prediction accuracy parameter is assumed to be 100%. This is because this study aims to quantify the effect of a large number of parameters individually on availability latency.
Here, we set the size of every data item, S_i, to 1. α represents the number of storage slots per AutoMata. Each storage slot stores one data item. γ represents the duration of the client's journey in terms of the number of time slots. Hence the possible values of availability latency are between 0 and γ. δ is defined as the number of time slots after which a client AutoMata device will encounter a replica of the data item for the first time. If a replica of the requested data item was encountered by the client in the first cell, we set δ = 0. If δ > γ, we set δ = γ, indicating that no copy of the requested data item was encountered by the client during its entire journey.
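A minimal sketch of this simulation step, assuming the unbiased 8-neighbor walk described above (all names are ours):
import random

# The 8 possible one-cell moves on the torus.
MOVES = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def step(cell, side):
    # One time slot: move to a uniformly chosen neighboring cell,
    # wrapping around the torus edges.
    x, y = cell
    dx, dy = random.choice(MOVES)
    return ((x + dx) % side, (y + dy) % side)

def availability_latency(client, servers, side, gamma):
    # delta = number of slots until the client shares a cell with any
    # server; delta = 0 if a replica is in the first cell, and delta
    # is capped at gamma if no replica is met during the trip.
    for t in range(gamma + 1):
        if client in servers:
            return t
        client = step(client, side)
        servers = [step(s, side) for s in servers]
    return gamma

random.seed(1)
print(availability_latency((0, 0), [(3, 3)], side=5, gamma=10))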
In all our simulations, for illustration we consider a 5 × 5 2D torus with γ set to 10. Our experiments indicate that the trends in the results scale to maps of larger size.
We simulated a skewed distribution of access to the T data items that obeys Zipf's law with a mean of 0.27. This distribution has been shown to correspond to the sale of movie theater tickets in the United States [6]. We employ a replication scheme that allocates replicas for a data item as a function of the square root of the frequency of access to that item. The square-root replication scheme is shown to have competitive latency performance over a large parameter space [8]. The data item replicas are distributed uniformly across the AutoMata devices. This serves as the baseline no-zebroids case. The square-root scheme also provides the initial replica distribution when zebroids are employed. Note that the replacements performed by the zebroids will cause changes to the data item replica distribution. Requests generated as per the Zipf distribution are issued one at a time. The client car that issues the request is chosen in a round-robin manner. After a maximum period of γ, the latency encountered by this request is recorded.
In all the simulation results, each point is an average over 200,000 requests. Moreover, the 95% confidence intervals determined for the results are quite tight for the metrics of latency and replacement overhead. Hence, we only present them for the metric that captures the percentage improvement in latency with respect to the no-zebroids case.
6. RESULTS
In this section, we describe our evaluation results, where the following key questions are addressed. With a wide choice of replacement schemes available for a zebroid, what is their effect on availability latency? A more central question is: do zebroids provide significant improvements in availability latency? What is the associated overhead incurred in employing these zebroids? What happens to these improvements in scenarios where a dispatcher may have imperfect information about the car routes? What inherent trade-offs exist between car density and storage per car with regards to their combined as well as individual effect on availability latency in the presence of zebroids? We present both simple analysis and detailed simulations to provide answers to these questions as well as to gain insights into the design of carrier-based systems.
Figure 1: Availability latency when employing one-instantaneous zebroids as a function of (N, α) values, when the total storage in the system is kept fixed at S_T = 200 (curves shown for the lru_global, lfu_global, lru_local, lfu_local, and random policies).
6.1 How does a replacement scheme employed by a zebroid impact availability latency?
For illustration, we present 'scale-up' experiments where one-instantaneous zebroids are employed (see Figure 1). By scale-up, we mean that α and N are changed proportionally to keep the total system storage, S_T, constant. Here, T = 50 and S_T = 200. We choose the following values of (N, α) = {(20,10), (25,8), (50,4), (100,2)}. The figure indicates that a random replacement scheme shows competitive performance. This is because of several reasons. Recall that the initial replica distribution is set as per the square-root replication scheme.
The random replacement scheme does not alter this distribution, since it makes replacements blind to the popularity of a data item. However, the replacements cause dynamic data reorganization so as to better serve the currently active request. Moreover, the mobility pattern of the cars is random; hence, the locations from which the requests are issued by clients are also random and not known a priori at the dispatcher. These findings are significant because a random replacement policy can be implemented in a simple, decentralized manner.
The lru-global and lfu-global schemes provide a latency performance that is worse than random. This is because these policies rapidly develop a preference for the more popular data items, thereby creating a larger number of replicas for them. During eviction, the more popular data items are almost never selected as replacement candidates. Consequently, there are fewer replicas for the less popular items. Hence, the initial distribution of the data item replicas changes from square-root to one resembling linear replication. The higher number of replicas for the popular data items provides marginal additional benefits, while the lower number of replicas for the other data items hurts the latency performance of these global policies. The lfu-local and lru-local schemes have performance similar to random, since they do not have enough history of local data item requests. We speculate that the performance of these local policies will approach that of their global variants for a large enough history of data item requests. On account of the competitive performance shown by a random policy, for the remainder of the paper we present the performance of zebroids that employ a random replacement policy.
6.2 Do zebroids provide significant improvements in availability latency?
We find that in many scenarios employing zebroids provides substantial improvements in availability latency.
6.2.1 Analysis
We first consider the case of one-instantaneous zebroids. Figure 2.a shows the variation in δ_agg as a function of N for T = 10 and α = 1 with a 10 × 10 torus, using Equation 4. Both the x and y axes are drawn to a log scale. Figure 2.b shows the % improvement in δ_agg obtained with one-instantaneous zebroids. In this case, only the x-axis is drawn to a log scale. For illustration, we assume that the T data items are requested uniformly.
Initially, when the network is sparse, the analytical approximation for the improvements in latency with zebroids, obtained from Equations 2 and 4, closely matches the simulation results. However, as N increases, the sparseness assumption for which the analysis is valid, namely N << G, is no longer true. Hence, the two curves rapidly diverge. The point at which the two curves move away from each other corresponds to a value of δ_agg ≤ 1. Moreover, as mentioned earlier, the analysis provides an upper bound on the latency improvements, as it treats the newly created replicas given by N_i^c independently. However, these N_i^c replicas start from the same cell as one of the server replicas r_i. Finally, the analysis captures a one-shot scenario where, given an initial data item replica distribution, the availability latency is computed. The new replicas created do not affect future requests from the client.
On account of space limitations, here we summarize the observations in the case when z-relay zebroids are employed.
The interested reader can obtain further details in [9]. Observations similar to the one-instantaneous zebroid case apply, since the simulation and analysis curves again start diverging when the analysis assumptions are no longer valid. However, the key observation here is that the latency improvement with z-relay zebroids is significantly better than in the one-instantaneous zebroids case, especially for lower storage scenarios. This is because in sparse scenarios the transitive hand-offs between the zebroids create a higher number of replicas for the requested data item, yielding lower availability latency. Moreover, it is also seen that the simulation validation curve for the improvements in δ_agg with z-relay zebroids approaches that of the one-instantaneous zebroid case for higher storage (higher N values). This is because one-instantaneous zebroids are a special case of z-relay zebroids.
6.2.2 Simulation
We conduct simulations to examine the entire storage spectrum obtained by changing car density N or storage per car α, to also capture scenarios where the sparseness assumptions for which the analysis is valid do not hold. We separate the effects of N and α by capturing the variation of N while keeping α constant (case 1) and vice versa (case 2), both with z-relay and one-instantaneous zebroids. Here, we set the repository size as T = 25. Figure 3 captures case 1 mentioned above. Similar trends are observed with case 2; a complete description of those results is available in [9].
With Figure 3.b, keeping α constant, initially increasing car density has higher latency benefits because increasing N introduces more zebroids into the system. As N is further increased, ω reduces because the total storage in the system goes up. Consequently, the number of replicas per data item goes up, thereby increasing the number of servers. Hence, the replacement policy cannot find a zebroid as often to transport the requested data item to the client earlier than any of the servers. On the other hand, the increased number of servers benefits the no-zebroids case in bringing δ_agg down. The net effect is a reduction in ω for larger values of N.
Figure 2: Latency performance with one-instantaneous zebroids via simulations along with the analytical approximation for a 10 × 10 torus with T = 10 (panel 2.a shows δ_agg for analysis and simulation, with and without zebroids, on log-log axes; panel 2.b shows the % improvement ω in δ_agg with respect to no-zebroids, analytical upper bound versus simulation).
The trends mentioned above are similar to those obtained from the analysis. However, somewhat counter-intuitively, with relatively higher system storage, z-relay zebroids provide slightly lower improvements in latency as compared to one-instantaneous zebroids. We speculate that this is due to the different data item replica distributions enforced by them. Note that replacements performed by the zebroids cause fluctuations in these replica distributions, which may affect future client requests.
We are currently exploring suitable choices of parameters that can capture these changing replica distributions.
6.3 What is the overhead incurred with improvements in latency with zebroids?
We find that the improvements in latency with zebroids are obtained at a minimal replacement overhead (< 1 per client request).
6.3.1 Analysis
With one-instantaneous zebroids, for each client request a maximum of one zebroid is employed for latency improvement. Hence, the replacement overhead per client request can amount to a maximum of one. Recall that to calculate the latency with one-instantaneous zebroids, N_i^c new replicas are created in the same cell as the servers. Now a replacement is only incurred if one of these N_i^c newly created replicas meets the client earlier than any of the r_i servers.
Figure 3: Latency performance with both one-instantaneous and z-relay zebroids as a function of the car density when α = 2 and T = 25 (panel 3.a shows the aggregate availability latency δ_agg for no-zebroids, one-instantaneous, and z-relays; panel 3.b shows the % improvement ω in δ_agg with respect to no-zebroids).
Let X_{r_i} and X_{N_i^c} respectively be random variables that capture the minimum time until any of the r_i and N_i^c replicas meet the client. Since X_{r_i} and X_{N_i^c} are assumed to be independent, by the property of exponentially distributed random variables we have
\text{Overhead/request} = 1 \cdot P(X_{N_i^c} < X_{r_i}) + 0 \cdot P(X_{r_i} \le X_{N_i^c}) \quad (6)
\text{Overhead/request} = \frac{N_i^c / C}{r_i / C + N_i^c / C} = \frac{N_i^c}{r_i + N_i^c} \quad (7)
Recall that the number of replicas for data item i, r_i, is a function of the total storage in the system, i.e., r_i = k·N·α, where k satisfies the constraint 1 ≤ r_i ≤ N. Using this along with Equation 3, we get
\text{Overhead/request} = 1 - \frac{G}{G + N \cdot (1 - k \cdot \alpha)} \quad (8)
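A small numerical sketch of Equations 7 and 8 (the function names are ours):
def overhead_per_request(r, Nc):
    # Eq. 7: probability that a newly spawned carrier meets the client
    # before any of the r servers, for exponential meeting times.
    return Nc / (r + Nc)

def overhead_closed_form(N, G, k, alpha):
    # Eq. 8, with r_i = k * N * alpha.
    return 1 - G / (G + N * (1 - k * alpha))

# Example: 100 cars on a 10 x 10 torus (G = 100), with k chosen so
# that r_i = k * N * alpha = 2.
print(overhead_closed_form(N=100, G=100, k=0.01, alpha=2))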
However, the total\nsystem storage can be changed either by varying car density (N) or\nstorage per car (\u03b1). On account of similar trends, here we present\nthe case when \u03b1 is kept constant and N is varied (Figure 5). We\nrefer the reader to [9] for the case when \u03b1 is varied and N is held\nconstant.\nWe present an intuitive argument for the behavior of the\nperrequest replacement overhead curves. When the storage is extremely\nscarce so that only one replica per data item exists in the AutoMata\nnetwork, the number of replacements performed by the zebroids is\nzero since any replacement will cause a data item to be lost from\nthe system. The dispatcher ensures that no data item is lost from\nthe system. At the other end of the spectrum, if storage becomes\nso abundant that \u03b1 = T then the entire data item repository can\nbe replicated on every car. The number of replacements is again\nzero since each request can be satisfied locally. A similar scenario\noccurs if N is increased to such a large value that another car with\nthe requested data item is always available in the vicinity of the\nclient. However, there is a storage spectrum in the middle where\nreplacements by the scheduled zebroids result in improvements in\n\u03b4agg (see Figure 3).\nMoreover, we observe that for sparse storage scenarios, the higher\nimprovements with z-relay zebroids are obtained at the cost of a\nhigher replacement overhead when compared to the one-instantaneous\nzebroids case. This is because in the former case, each of these z\nzebroids selected along the lowest latency path to the client needs\nto perform a replacement. However, the replacement overhead is\nstill less than 1 indicating that on an average less than one\nreplacement per client request is needed even when z-relay zebroids are\nemployed.\n0 50 100 150 200 250 300 350 400\n0\n0.1\n0.2\n0.3\n0.4\n0.5\n0.6\n0.7\n0.8\n0.9\n1\nNumber of cars\nz\u2212relays\none\u2212instantaneous\nAverage number of replacements per request\nFigure 5: Figure 5 shows the replacement overhead with\nzebroids for the cases when N is varied keeping \u03b1 = 2.\n10 20 30 40 50 60 70 80 90 100\n0\n0.5\n1\n1.5\n2\n2.5\n3\n3.5\n4\nPrediction percentage\nno\u2212zebroids (N=50)\none\u2212instantaneous (N=50)\nz\u2212relays (N=50)\nno\u2212zebroids (N=200)\none\u2212instantaneous (N=200) z\u2212relays (N=200)\nAggregate Availability Latency (\u03b4agg\n)\nFigure 6: Figure 6 shows \u03b4agg for different car densities as a\nfunction of the prediction accuracy metric with \u03b1 = 2 and T =\n25.\n6.4 What happens to the availability latency\nwith zebroids in scenarios with\ninaccuracies in the car route predictions?\nWe find that zebroids continue to provide improvements in\navailability latency even with lower accuracy in the car route\npredictions. We use a single parameter p to quantify the accuracy of the\ncar route predictions.\n6.4.1 Analysis\nSince p represents the probability that a car route predicted by the\ndispatcher matches the actual one, hence, the latency with zebroids\ncan be approximated by,\n\u03b4err\nagg = p \u00b7 \u03b4agg(zeb) + (1 \u2212 p) \u00b7 \u03b4agg(no \u2212 zeb) (9)\n\u03b4err\nagg = p \u00b7 \u03b4agg(zeb) + (1 \u2212 p) \u00b7\nC\nri\n(10)\nExpressions for \u03b4agg(zeb) can be obtained from Equations 4\n(one-instantaneous) or 5 (z-relay zebroids).\n6.4.2 Simulation\nFigure 6 shows the variation in \u03b4agg as a function of this route\nprediction accuracy metric. 
6.4.2 Simulation
Figure 6 shows the variation in δ_agg as a function of this route prediction accuracy metric. We observe a smooth reduction in the improvement in δ_agg as the prediction accuracy metric reduces. For zebroids that are scheduled but fail to rendezvous with the client due to the prediction error, we tag any such replacements made by the zebroids as failed. It is seen that failed replacements gradually increase as the prediction accuracy reduces.
6.5 Under what conditions are the improvements in availability latency with zebroids maximized?
Surprisingly, we find that the improvements in latency obtained with one-instantaneous zebroids are independent of the input distribution of the popularity of the data items.
6.5.1 Analysis
The fractional difference (labelled ω) in the latency between the no-zebroids and one-instantaneous zebroids cases is obtained from Equations 2, 3, and 4 as
\omega = \frac{\sum_{i=1}^{T} \frac{f_i \cdot C}{r_i} - \sum_{i=1}^{T} \frac{f_i \cdot C}{r_i + (N - r_i)\left(1 - (1 - \frac{1}{G})^{r_i}\right)}}{\sum_{i=1}^{T} \frac{f_i \cdot C}{r_i}} \quad (11)
Here C = c·G·log G. This captures the fractional improvement in availability latency obtained by employing one-instantaneous zebroids. Let α = 1, making the total storage in the system S_T = N. Assuming the initial replica distribution is as per the square-root replication scheme, we have r_i = \frac{\sqrt{f_i} \cdot N}{\sum_{j=1}^{T} \sqrt{f_j}}. Hence, we get f_i = \frac{K^2 \cdot r_i^2}{N^2}, where K = \sum_{j=1}^{T} \sqrt{f_j}. Using this, along with the approximation (1 - x)^n \approx 1 - n \cdot x for small x, we simplify the above equation to get
\omega = 1 - \frac{\sum_{i=1}^{T} \frac{r_i}{1 + \frac{N - r_i}{G}}}{\sum_{i=1}^{T} r_i}
In order to determine when the gains with one-instantaneous zebroids are maximized, we can frame an optimization problem as follows: maximize ω, subject to \sum_{i=1}^{T} r_i = S_T.
THEOREM 1. With a square-root replication scheme, the improvements obtained with one-instantaneous zebroids are independent of the input popularity distribution of the data items. (See [9] for the proof.)
6.5.2 Simulation
We perform simulations with two different frequency distributions of data items: uniform and Zipfian (with mean 0.27). Similar latency improvements with one-instantaneous zebroids are obtained in both cases. This result has important implications. In cases with biased popularity toward certain data items, the aggregate improvements in latency across all data item requests still remain the same. Even in scenarios where the frequency of access to the data items changes dynamically, zebroids will continue to provide similar latency improvements.
7. CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS
In this study, we examined the improvements in latency that can be obtained in the presence of carriers that deliver a data item from a server to a client. We quantified the variation in availability latency as a function of a rich set of parameters such as car density, storage per car, title database size, and replacement policies employed by zebroids.
Below we summarize some key future research directions we intend to pursue. To better reflect reality, we would like to validate the observations obtained from this study with some real-world simulation traces of vehicular movements (for example, using CORSIM [1]). This will also serve as a validation of the utility of the Markov mobility model used in this study. We are currently analyzing the performance of zebroids on a real-world data set comprising an ad-hoc network of buses moving around a small neighborhood in Amherst [4].
Zebroids may also be used to deliver data items that carry delay-sensitive information with a certain expiry. Extensions to zebroids that satisfy such application requirements present an interesting future research direction.

8. ACKNOWLEDGMENTS

This research was supported in part by an Annenberg fellowship and NSF grants numbered CNS-0435505 (NeTS NOSS), CNS-0347621 (CAREER), and IIS-0307908.

9. REFERENCES
[1] Federal Highway Administration. Corridor simulation. Version 5.1, http://www.ops.fhwa.dot.gov/trafficanalysistools/corsim.htm.
[2] D. Aldous and J. Fill. Reversible Markov Chains and Random Walks on Graphs. Under preparation.
[3] A. Bar-Noy, I. Kessler, and M. Sidi. Mobile Users: To Update or Not to Update. In IEEE Infocom, 1994.
[4] J. Burgess, B. Gallagher, D. Jensen, and B. Levine. MaxProp: Routing for Vehicle-Based Disruption-Tolerant Networking. In IEEE Infocom, April 2006.
[5] E. Cohen and S. Shenker. Replication Strategies in Unstructured Peer-to-Peer Networks. In SIGCOMM, 2002.
[6] A. Dan, D. Dias, R. Mukherjee, D. Sitaram, and R. Tewari. Buffering and Caching in Large-Scale Video Servers. In COMPCON, 1995.
[7] S. Ghandeharizadeh, S. Kapadia, and B. Krishnamachari. PAVAN: A Policy Framework for Content Availability in Vehicular ad-hoc Networks. In VANET, New York, NY, USA, 2004. ACM Press.
[8] S. Ghandeharizadeh, S. Kapadia, and B. Krishnamachari. Comparison of Replication Strategies for Content Availability in C2P2 Networks. In MDM, May 2005.
[9] S. Ghandeharizadeh, S. Kapadia, and B. Krishnamachari. An Evaluation of Availability Latency in Carrier-based Vehicular ad-hoc Networks. Technical report CENG-2006-1, Department of Computer Science, University of Southern California, 2006.
[10] S. Ghandeharizadeh and B. Krishnamachari. C2P2: A Peer-to-Peer Network for On-Demand Automobile Information Services. In Globe. IEEE, 2004.
[11] T. Hara. Effective Replica Allocation in ad-hoc Networks for Improving Data Accessibility. In IEEE Infocom, 2001.
[12] H. Hayashi, T. Hara, and S. Nishio. A Replica Allocation Method Adapting to Topology Changes in ad-hoc Networks. In DEXA, 2005.
[13] P. Juang, H. Oki, Y. Wang, M. Martonosi, L. Peh, and D. Rubenstein. Energy-Efficient Computing for Wildlife Tracking: Design Tradeoffs and Early Experiences with ZebraNet. SIGARCH Comput. Archit. News, 2002.
[14] A. Pentland, R. Fletcher, and A. Hasson. DakNet: Rethinking Connectivity in Developing Nations. Computer, 37(1):78-83, 2004.
[15] F. Sailhan and V. Issarny. Cooperative Caching in ad-hoc Networks. In MDM, 2003.
[16] R. Shah, S. Roy, S. Jain, and W. Brunette. Data MULEs: Modeling and Analysis of a Three-Tier Architecture for Sparse Sensor Networks. Elsevier ad-hoc Networks Journal, 1, September 2003.
[17] T. Spyropoulos, K. Psounis, and C. Raghavendra. Single-Copy Routing in Intermittently Connected Mobile Networks. In SECON, April 2004.
[18] A. Tanenbaum. Modern Operating Systems, 2nd Edition, Chapter 4, Section 4.4. Prentice Hall, 2001.
[19] A. Vahdat and D. Becker. Epidemic Routing for Partially-Connected ad-hoc Networks. Technical report, Department of Computer Science, Duke University, 2000.
[20] W. Zhao, M. Ammar, and E. Zegura. A Message Ferrying Approach for Data Delivery in Sparse Mobile ad-hoc Networks. In MobiHoc, pages 187-198, New York, NY, USA, 2004. ACM Press.
[21] M. Zonoozi and P. Dassanayake. User Mobility Modeling and Characterization of Mobility Pattern.
IEEE Journal on Selected Areas in Communications, 15:1239-1252, September 1997.", "keywords": "naive random replacement policy;zebroid;vehicular network;termed zebroid;audio and video clip;mobility;car density;storage per device;datum carrier;latency;simplified instantiation of zebroid;automaton;availability latency;repository size;markov model;zebroid simplified instantiation;mobile device;peer-to-peer vehicular ad-hoc network;replacement policy"} {"name": "train_C-69", "title": "pTHINC: A Thin-Client Architecture for Mobile Wireless Web", "abstract": "Although web applications are gaining popularity on mobile wireless PDAs, web browsers on these systems can be quite slow and often lack adequate functionality to access many web sites. We have developed pTHINC, a PDA thin-client solution that leverages more powerful servers to run full-function web browsers and other application logic, then sends simple screen updates to the PDA for display. pTHINC uses server-side screen scaling to provide high-fidelity display and seamless mobility across a broad range of different clients and screen sizes, including both portrait and landscape viewing modes. pTHINC also leverages existing PDA control buttons to improve system usability and maximize available screen resolution for application display. We have implemented pTHINC on Windows Mobile and evaluated its performance on mobile wireless devices. Our results compared to local PDA web browsers and other thin-client approaches demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback.", "fulltext": "1. INTRODUCTION

The increasing ubiquity of wireless networks and the decreasing cost of hardware are fueling a proliferation of mobile wireless handheld devices, both as standalone wireless Personal Digital Assistants (PDAs) and as popular integrated PDA/cell phone devices. These devices are enabling new forms of mobile computing and communication. Service providers are leveraging these devices to deliver pervasive web access, and mobile web users already often use these devices to access web-enabled information such as news, email, and localized travel guides and maps. It is likely that within a few years, most of the devices accessing the web will be mobile.

Users typically access web content by running a web browser and associated helper applications locally on the PDA. Although native web browsers exist for PDAs, they deliver subpar performance and have a much smaller feature set and more limited functionality than their desktop computing counterparts [10]. As a result, PDA web browsers are often unable to display web content from web sites that leverage more advanced web technologies to deliver a richer web experience. This fundamental problem arises for two reasons. First, because PDAs have a completely different hardware/software environment from traditional desktop computers, web applications need to be rewritten and customized for PDAs, if that is possible at all, duplicating development costs. Because the desktop application market is larger and more mature, most development effort generally ends up being spent on desktop applications, resulting in greater functionality and performance than their PDA counterparts. Second, PDAs have a more resource-constrained environment than traditional desktop computers in order to provide a smaller form factor and longer battery life.
Desktop web browsers are large, complex applications that are unable to run on a PDA. Instead, developers are forced to significantly strip down these web browsers to produce a usable PDA web browser, thereby crippling PDA browser functionality.

Thin-client computing provides an alternative approach for enabling pervasive web access from handheld devices. A thin-client computing system consists of a server and a client that communicate over a network using a remote display protocol. The protocol enables graphical displays to be virtualized and served across a network to a client device, while application logic is executed on the server. Using the remote display protocol, the client transmits user input to the server, and the server returns screen updates of the applications to the client. Using a thin-client model for mobile handheld devices, PDAs can become simple stateless clients that leverage the remote server's capabilities to execute web browsers and other helper applications.

The thin-client model provides several important benefits for the mobile wireless web. First, standard desktop web applications can be used to deliver web content to PDAs without rewriting or adapting applications to execute on a PDA, reducing development costs and leveraging existing software investments. Second, complex web applications can be executed on powerful servers instead of running stripped-down versions on more resource-constrained PDAs, providing greater functionality and better performance [10]. Third, web applications can take advantage of servers with faster networks and better connectivity, further boosting application performance. Fourth, PDAs can be even simpler devices, since they do not need to perform complex application logic, potentially reducing energy consumption and extending battery life. Finally, PDA thin clients can be essentially stateless appliances that do not need to be backed up or restored, require almost no maintenance or upgrades, and do not store any sensitive data that can be lost or stolen. This model provides a viable avenue for medical organizations to comply with HIPAA regulations [6] while embracing mobile handhelds in their day-to-day operations.

Despite these potential advantages, thin clients have been unable to provide the full range of these benefits in delivering web applications to mobile handheld devices. Existing thin clients were not designed for PDAs and do not account for important usability issues in the context of small form factor devices, resulting in difficulty navigating displayed web content. Furthermore, existing thin clients are ineffective at providing seamless mobility across the heterogeneous mix of device display sizes and resolutions. While existing thin clients can already provide faster performance than native PDA web browsers in delivering HTML web content, they do not effectively support more display-intensive web helper applications such as multimedia video, which is increasingly an integral part of available web content.

To harness the full potential of thin-client computing in providing the mobile wireless web on PDAs, we have developed pTHINC (PDA THin-client InterNet Computing). pTHINC builds on our previous work on THINC [1] to provide a thin-client architecture for mobile handheld devices.
pTHINC virtualizes and resizes the display on the server to efficiently deliver high-fidelity screen updates to a broad range of different clients, screen sizes, and screen orientations, including both portrait and landscape viewing modes. This enables pTHINC to provide the same persistent web session across different client devices. For example, pTHINC can provide the same web browsing session appropriately scaled for display on a desktop computer and a PDA, so that the same cookies, bookmarks, and other meta-data are continuously available on both machines simultaneously. pTHINC's virtual display approach leverages semantic information available in display commands, along with client-side video hardware, to provide more efficient remote display mechanisms that are crucial for supporting more display-intensive web applications. Given the limited display resolution of PDAs, pTHINC maximizes the use of screen real estate for remote display by moving control functionality from the screen to readily available PDA control buttons, improving system usability.

We have implemented pTHINC on Windows Mobile and demonstrated that it works transparently with existing applications, window systems, and operating systems, and does not require modifying, recompiling, or relinking existing software. We have quantitatively evaluated pTHINC against local PDA web browsers and other thin-client approaches on Pocket PC devices. Our experimental results demonstrate that pTHINC provides superior web browsing performance and is the only PDA thin client that effectively supports crucial browser helper applications such as video playback.

This paper presents the design and implementation of pTHINC. Section 2 describes the overall usage model and usability characteristics of pTHINC. Section 3 presents the design and system architecture of pTHINC. Section 4 presents experimental results measuring the performance of pTHINC on web applications and comparing it against native PDA browsers and other popular PDA thin-client systems. Section 5 discusses related work. Finally, we present some concluding remarks.

2. PTHINC USAGE MODEL

pTHINC is a thin-client system that consists of a simple client viewer application that runs on the PDA and a server that runs on a commodity PC. The server leverages more powerful PCs to run web browsers and other application logic. The client takes user input from the PDA stylus and virtual keyboard and sends it to the server to pass to the applications. Screen updates are then sent back from the server to the client for display to the user.

When the pTHINC PDA client is started, the user is presented with a simple graphical interface where information such as server address and port, user authentication information, and session settings can be provided. pTHINC first attempts to connect to the server and perform the necessary handshaking. Once this process has been completed, pTHINC presents the user with the most recent display of his session. If the session does not exist, a new session is created. Existing sessions can be seamlessly continued without changes in the session settings or server configuration.

Unlike other thin-client systems, pTHINC provides the user with a persistent web session model in which a user can launch a session running a web browser and associated applications at the server, then disconnect from that session and reconnect to it again at any time.
When a user reconnects to the session, all of the applications continue running where the user left off, so that the user can continue working as though he or she never disconnected. The ability to disconnect and reconnect to a session at any time is an important benefit for mobile wireless PDA users, who may have intermittent network connectivity.

pTHINC's persistent web session model enables a user to reconnect to a web session from devices other than the one on which the web session was originally initiated. This provides users with seamless mobility across different devices. If a user loses his PDA, he can easily use another PDA to access his web session. Furthermore, pTHINC allows users to use non-PDA devices to access web sessions as well. A user can access the same persistent web session on a desktop PC as on a PDA, enabling a user to use the same web session from any computer.

pTHINC's persistent web session model addresses a key problem encountered by mobile web users: the lack of a common web environment across computers. Web browsers often store important information such as bookmarks, cookies, and history, which enables them to function in a much more useful manner. The problem that occurs when a user moves between computers is that this data, which is specific to a web browser installation, cannot move with the user. Furthermore, web browsers often need helper applications to process different media content, and those applications may not be consistently available across all computers. pTHINC addresses this problem by enabling a user to remotely use the exact same web browser environment and helper applications from any computer. As a result, pTHINC can provide a common, consistent web browsing environment for mobile users across different devices without requiring them to repeatedly synchronize different web browsing environments across multiple machines.

To enable a user to access the same web session on different devices, pTHINC must provide mechanisms to support different display sizes and resolutions. Toward this end, pTHINC provides a zoom feature that enables a user to zoom in and out of a display and allows the display of a web session to be resized to fit the screen of the device being used. For example, if the server is running a web session at 1024×768 but the client is a PDA with a display resolution of 640×480, pTHINC will resize the desktop display to fit the full display on the smaller screen of the PDA. pTHINC provides the PDA user with the option to increase the size of the display by zooming in to different parts of the display. Users are often familiar with the general layout of commonly visited websites, and are able to leverage this resizing feature to better navigate through web pages. For example, a user can zoom out of the display to view the entire page content and navigate hyperlinks, then zoom in to a region of interest for a better view.

Figure 1: pTHINC shortcut keys

To enable a user to access the same web session on different devices, pTHINC must also provide mechanisms to support different display orientations. In a desktop environment, users are typically accustomed to having displays presented in landscape mode, where the screen width is larger than its height. However, in a PDA environment, the choice is not always obvious.
Some users may prefer having the display in portrait mode, as it is easier to hold the device in their hands, while others may prefer landscape mode in order to minimize the amount of side-scrolling necessary to view a web page. To accommodate PDA user preferences, pTHINC provides an orientation feature that enables it to seamlessly rotate the display between landscape and portrait modes. The landscape mode is particularly useful for pTHINC users who frequently access their web sessions on both desktop and PDA devices, providing those users with the same familiar landscape setting across different devices.

Because screen space is a relatively scarce resource on PDAs, pTHINC runs in fullscreen mode to maximize the screen area available to display the web session. To be able to use all of the screen on the PDA and still allow the user to control and interact with it, pTHINC reuses the typical shortcut buttons found on PDAs to perform all the control functions available to the user. The buttons used by pTHINC do not require any OS environment changes; they are simply intercepted by the pTHINC client application when they are pressed. Figure 1 shows how pTHINC utilizes the shortcut buttons to provide easy navigation and improve the overall user experience. These buttons are not device specific, and the layout shown is common to widely used PocketPC devices. pTHINC provides six shortcuts to support its usage model:

• Rotate Screen: The record button on the left edge is used to rotate the screen between portrait and landscape mode each time the button is pressed.
• Zoom Out: The leftmost button on the bottom front is used to zoom out the display of the web session, providing a bird's-eye view of the web session.
• Zoom In: The second leftmost button on the bottom front is used to zoom in the display of the web session to more clearly view content of interest.
• Directional Scroll: The middle button on the bottom front is used to scroll around the display using a single control button in a way that is already familiar to PDA users. This feature is particularly useful when the user has zoomed in to a region of the display such that only part of the display is visible on the screen.
• Show/Hide Keyboard: The second rightmost button on the bottom front is used to bring up a virtual keyboard drawn on the screen for devices which have no physical keyboard. The virtual keyboard uses standard PDA OS mechanisms, providing portability across different PDA environments.
• Close Session: The rightmost button on the bottom front is used to disconnect from the pTHINC session.

pTHINC uses the PDA touch screen, stylus, and standard user interface mechanisms to provide a point-and-click user interface metaphor similar to that provided by the mouse in a traditional desktop computing environment. pTHINC does not use a cursor, since PDA environments do not provide one. Instead, a user can use the stylus to tap on different sections of the touch screen to indicate input focus. A single tap on the touch screen generates a corresponding single-click mouse event. A double tap on the touch screen generates a corresponding double-click mouse event. pTHINC provides two-button mouse emulation by using the stylus to press down on the screen for one second to generate a right mouse click. All of these actions are identical to the way users already interact with PDA applications in the common PocketPC environment.
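A minimal sketch of the shortcut-button dispatch described above is shown below; the key codes and the mapping itself are hypothetical stand-ins, not the actual pTHINC bindings:

from enum import Enum, auto

class Action(Enum):
    ROTATE_SCREEN = auto()
    ZOOM_OUT = auto()
    ZOOM_IN = auto()
    DIRECTIONAL_SCROLL = auto()
    TOGGLE_KEYBOARD = auto()
    CLOSE_SESSION = auto()

# Hypothetical device key codes for the six shortcuts above.
BUTTON_MAP = {
    "KEY_RECORD": Action.ROTATE_SCREEN,
    "KEY_APP1": Action.ZOOM_OUT,
    "KEY_APP2": Action.ZOOM_IN,
    "KEY_DPAD": Action.DIRECTIONAL_SCROLL,
    "KEY_APP3": Action.TOGGLE_KEYBOARD,
    "KEY_APP4": Action.CLOSE_SESSION,
}

def on_button_press(key_code, session):
    # Intercept the hardware button inside the client application and
    # apply the mapped control action; no OS environment changes needed.
    action = BUTTON_MAP.get(key_code)
    if action is not None:
        session.apply(action)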
In web browsing, users can click on hyperlinks and focus on input boxes by simply tapping on the desired screen area of interest. Unlike local PDA web browsers and other PDA applications, pTHINC leverages more powerful desktop user interface metaphors to enable users to manipulate multiple open application windows instead of being limited to a single application window at any given moment. This provides increased browsing flexibility beyond what is currently available on PDA devices. Similar to a desktop environment, browser windows and other application windows can be moved around by pressing down and dragging the stylus, similar to a mouse.

3. PTHINC SYSTEM ARCHITECTURE

pTHINC builds on the THINC [1] remote display architecture to provide a thin-client system for PDAs. pTHINC virtualizes the display at the server by leveraging the video device abstraction layer, which sits below the window server and above the framebuffer. This is a well-defined, low-level, device-dependent layer that exposes the video hardware to the display system. pTHINC accomplishes this through a simple virtual display driver that intercepts drawing commands, packetizes them, and sends them over the network.
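To make the interception path concrete, here is one plausible way a virtual display driver could packetize an intercepted drawing command (the command set itself is described next; the binary layout shown is our illustration, not the actual THINC wire format):

import struct

# Illustrative numeric ids for the protocol commands of Table 1 below.
COPY, SFILL, PFILL, BITMAP, RAW = range(5)

def packetize(command, x, y, width, height, payload=b""):
    # Fixed header: command id, destination rectangle, and payload length
    # (network byte order), followed by command-specific payload bytes.
    header = struct.pack("!BHHHHI", command, x, y, width, height, len(payload))
    return header + payload

# Example: fill a 100x20 region at (0, 40) with one 24-bit color.
packet = packetize(SFILL, 0, 40, 100, 20, payload=bytes([255, 255, 255]))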
While other thin-client approaches intercept display commands at other layers of the display subsystem, pTHINC's display virtualization approach provides some key benefits in efficiently supporting PDA clients. For example, intercepting display commands at a higher layer, between applications and the window system, as is done by X [17], requires replicating and running a great deal of functionality on the PDA that is traditionally provided by the desktop window system. Given both the size and complexity of traditional window systems, attempting to replicate this functionality in the restricted PDA environment would be a daunting, and perhaps unfeasible, task. Furthermore, applications and the window system often require tight synchronization in their operation, and imposing a wireless network between them by running the applications on the server and the window system on the client would significantly degrade performance. On the other hand, intercepting at a lower layer by extracting pixels out of the framebuffer as they are rendered provides a simple solution that requires very little functionality on the PDA client, but can also result in degraded performance. The reason is that by the time the remote display server attempts to send screen updates, it has lost all semantic information that may have helped it encode efficiently, and it must resort to using a generic and expensive encoding mechanism on the server, as well as a potentially expensive decoding mechanism on the limited PDA client. In contrast to both the high- and low-level interception approaches, pTHINC's approach of intercepting at the device driver provides an effective balance between client and server simplicity and the ability to efficiently encode and decode screen updates.

By using a low-level virtual display approach, pTHINC can efficiently encode application display commands using only a small set of low-level commands. In a PDA environment, this set of commands provides a crucial component in maintaining the simplicity of the client in the resource-constrained PDA environment. The display commands are shown in Table 1, and work as follows. COPY instructs the client to copy a region of the screen from its local framebuffer to another location. This command improves the user experience by accelerating scrolling and opaque window movement without having to resend screen data from the server. SFILL, PFILL, and BITMAP are commands that paint a fixed-size region on the screen. They are useful for accelerating the display of solid window backgrounds, desktop patterns, backgrounds of web pages, text drawing, and certain operations in graphics manipulation programs. SFILL fills a sizable region on the screen with a single color. PFILL replicates a tile over a screen region. BITMAP performs a fill using a bitmap of ones and zeros as a stipple to apply a foreground and background color. Finally, RAW is used to transmit unencoded pixel data to be displayed verbatim on a region of the screen. This command is invoked as a last resort if the server is unable to employ any other command, and it is the only command that may be compressed to mitigate its impact on network bandwidth.

Command   Description
COPY      Copy a framebuffer area to specified coordinates
SFILL     Fill an area with a given pixel color value
PFILL     Tile an area with a given pixel pattern
BITMAP    Fill a region using a bit pattern
RAW       Display raw pixel data at a given location

Table 1: pTHINC Protocol Display Commands

pTHINC delivers its commands using a non-blocking, server-push update mechanism: as soon as display updates are generated on the server, they are sent to the client. Clients are not required to explicitly request display updates, thus minimizing the impact that the typically varying network latency of wireless links may have on the responsiveness of the system. Keeping in mind that resource-constrained PDAs and wireless networks may not be able to keep up with a fast server generating a large number of updates, pTHINC is able to coalesce, clip, and discard updates automatically if network loss or congestion occurs, or if the client cannot keep up with the rate of updates. This type of behavior proves crucial in a web browsing environment, where, for example, a page may be redrawn multiple times as it is rendered on the fly by the browser. In this case, the PDA will only receive and render the final result, which is clearly all the user is interested in seeing.

pTHINC prioritizes the delivery of updates to the PDA using a Shortest-Remaining-Size-First (SRSF) preemptive update scheduler. SRSF is analogous to Shortest-Remaining-Processing-Time scheduling, which is known to be optimal for minimizing mean response time in an interactive system. In a web browsing environment, short jobs are associated with text and basic page layout components such as the page's background, which are critical web content for the user. On the other hand, large jobs are often lower-priority beautifying elements or, even worse, web page banners and advertisements, which are of questionable value to the user as he or she is browsing the page. Using SRSF, pTHINC is able to maximize the utilization of the relatively scarce bandwidth available on the wireless connection between the PDA and the server.

3.1 Display Management

To enable users to just as easily access their web browser and helper applications from a desktop computer at home as from a PDA while on the road, pTHINC provides a resize mechanism to zoom in and out of the display of a web session. pTHINC resizing is completely supported by the server, not the client. The server resamples updates to fit within the PDA's viewport before they are transmitted over the network.
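As a rough illustration of this server-side step, the sketch below shrinks a raw RGB update to the client viewport with nearest-neighbor sampling; it is a crude stand-in for the higher-quality antialiased resampler described next:

import numpy as np

def downscale_to_viewport(update, out_w, out_h):
    # update: (H, W, 3) array of RGB pixels for one screen update.
    # For every destination pixel, pick the source pixel it maps to.
    h, w, _ = update.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return update[rows][:, cols]

# Example: fit a 1024x768 update onto a 640x480 client viewport.
frame = np.zeros((768, 1024, 3), dtype=np.uint8)
small = downscale_to_viewport(frame, 640, 480)   # shape (480, 640, 3)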
pTHINC uses Fant's resampling algorithm to resize pixel updates. This provides smooth, visually pleasing updates with proper antialiasing, and has only modest computational requirements.

pTHINC's resizing approach has a number of advantages. First, it allows the PDA to leverage the vastly superior computational power of the server to use high-quality resampling algorithms and produce higher-quality updates for the PDA to display. Second, resizing the screen does not translate into additional resource requirements for the PDA, since it does not need to perform any additional work. Finally, better utilization of the wireless network is attained, since rescaling the updates reduces their bandwidth requirements.

To enable users to orient their displays on a PDA to provide a viewing experience that best accommodates user preferences and the layout of web pages or applications, pTHINC provides a display rotation mechanism to switch between landscape and portrait viewing modes. pTHINC display rotation is completely supported by the client, not the server. pTHINC does not explicitly recalculate the geometry of display updates to perform rotation, which would be computationally expensive. Instead, pTHINC simply changes the way data is copied into the framebuffer to switch between display modes. When in portrait mode, data is copied along the rows of the framebuffer from left to right. When in landscape mode, data is copied along the columns of the framebuffer from top to bottom. These very fast and simple techniques replace one set of copy operations with another and impose no performance overhead. pTHINC provides its own rotation mechanism to support a wide range of devices without imposing additional feature requirements on the PDA. Although some newer PDA devices provide native support for different orientations, this mechanism is not dynamic and requires the user to rotate the PDA's entire user interface before starting the pTHINC client. Windows Mobile provides native API mechanisms for PDA applications to rotate their UI on the fly, but these mechanisms deliver poor performance and display quality, as the rotation is performed naively and is not completely accurate.

3.2 Video Playback

Video has gradually become an integral part of the World Wide Web, and its presence will only continue to increase. Web sites today not only use animated graphics and Flash to deliver web content in an attractive manner, but also utilize streaming video to enrich the web interface. Users are able to view pre-recorded and live newscasts on CNN, watch sports highlights on ESPN, and even search through large collections of videos on Google Video. To allow applications to provide efficient video playback, interfaces have been created in display systems that allow video device drivers to expose their hardware capabilities back to the applications. pTHINC takes advantage of these interfaces and its virtual device driver approach to provide a virtual bridge between the remote client, its hardware, and the applications, thereby transparently supporting video playback.

On top of this architecture, pTHINC uses the YUV colorspace to encode the video content, which provides a number of benefits. First, it has become increasingly common for PDA video hardware to natively support YUV and to perform the colorspace conversion and scaling automatically. As a result, pTHINC is able to provide fullscreen video playback without any performance hit.
Second, the use of YUV allows for a more efficient representation of RGB data without loss of quality, by taking advantage of the human eye's ability to better distinguish differences in brightness than in color. In particular, pTHINC uses the YV12 format, which allows full-color RGB data to be encoded using just 12 bits per pixel. Third, YUV data is produced as one of the last steps of the decoding process of most video codecs, allowing pTHINC to provide video playback in a manner that is format independent. Finally, even if the PDA's video hardware is unable to accelerate playback, the colorspace conversion process is simple enough that it does not impose unreasonable requirements on the PDA.

A more concrete example of how pTHINC leverages the PDA video hardware to support video playback can be seen in our prototype implementation on the popular Dell Axim X51v PDA, which is equipped with the Intel 2700G multimedia accelerator. In this case, pTHINC creates an offscreen buffer in video memory and writes and reads data in the YV12 format from this memory region. When a new video frame arrives, video data is copied from the buffer to an overlay surface in video memory, which is independent of the normal surface used for traditional drawing. As the YV12 data is put onto the overlay, the Intel accelerator automatically performs both colorspace conversion and scaling. By using the overlay surface, pTHINC has no need to redraw the screen once video playback is over, since the overlapped surface is unaffected. In addition, specific overlay regions can be manipulated by leveraging the video hardware, for example to perform hardware linear interpolation to smooth out the frame and display it fullscreen, and to do automatic rotation when the client runs in landscape mode.

4. EXPERIMENTAL RESULTS

We have implemented a pTHINC prototype that runs the client on widely used Windows Mobile-based Pocket PC devices and the server on both Windows and Linux operating systems. To demonstrate its effectiveness in supporting mobile wireless web applications, we have measured its performance on web applications. We present experimental results on different PDA devices for two popular web applications: browsing web pages and playing video content from the web. We compared pTHINC against native web applications running locally on the PDA to demonstrate the improvement that pTHINC can provide over the traditional fat-client approach. We also compared pTHINC against three of the most widely used thin clients that can run on PDAs: Citrix MetaFrameXP [2], Microsoft Remote Desktop [3], and VNC (Virtual Network Computing) [16]. We follow common practice and refer to Citrix MetaFrameXP and Microsoft Remote Desktop by their respective remote display protocols, ICA (Independent Computing Architecture) and RDP (Remote Desktop Protocol).

4.1 Experimental Testbed

We conducted our web experiments using two different wireless Pocket PC PDAs in an isolated Wi-Fi network testbed, as shown in Figure 2.

Figure 2: Experimental Testbed

The testbed consisted of two PDA client devices, a packet monitor, a thin-client server, and a web server. Except for the PDAs, all of the other machines were IBM Netfinity 4500R servers with dual 933 MHz Intel PIII CPUs and 512 MB RAM, and were connected on a switched 100 Mbps FastEthernet network. The web server used was Apache 1.3.27, the network emulator was NISTNet 2.0.12, and the packet monitor was Ethereal 0.10.9.
The PDA clients connected to the testbed through an 802.11b Lucent Orinoco AP-2000 wireless access point. All experiments using the wireless network were conducted within ten feet of the access point, so we considered the amount of packet loss to be negligible in our experiments.

Two Pocket PC PDAs were used to provide results across both an older, less powerful model and a newer, higher-performance model. The older model was a Dell Axim X5 with a 400 MHz Intel XScale PXA255 CPU and 64 MB RAM, running Windows Mobile 2003 and using a Dell TrueMobile 1180 2.4 GHz CompactFlash card for wireless networking. The newer model was a Dell Axim X51v with a 624 MHz Intel XScale XPA270 CPU and 64 MB RAM, running Windows Mobile 5.0 with integrated 802.11b wireless networking. The X51v has an Intel 2700G multimedia accelerator with 16 MB of video memory. Both PDAs are capable of 16-bit color but have different screen sizes and display resolutions. The X5 has a 3.5-inch diagonal screen with 240×320 resolution. The X51v has a 3.7-inch diagonal screen with 480×640 resolution.

The four thin clients that we used support different levels of display quality, as summarized in Table 2.

Client    1024×768   640×480   Depth    Resize   Clip
RDP       no         yes       8-bit    no       yes
VNC       yes        yes       16-bit   no       no
ICA       yes        yes       16-bit   yes      no
pTHINC    yes        yes       24-bit   yes      no

Table 2: Thin-Client Testbed Configuration Settings

The RDP client only supports a fixed 640×480 display resolution on the server with 8-bit color depth, while the other platforms provide higher levels of display quality. To provide a fair comparison across all platforms, we conducted our experiments with thin-client sessions configured for two possible resolutions, 1024×768 and 640×480. Both ICA and VNC were configured to use the native PDA color depth of 16 bits. The current pTHINC prototype uses 24-bit color directly, and the client downsamples updates to the 16-bit color depth available on the PDA. RDP was configured using only 8-bit color depth, since it does not support any better color depth. Since both pTHINC and ICA provide the ability to view the display resized to fit the screen, we measured both clients with and without the display resized to fit the PDA screen. Each thin client was tested using landscape rather than portrait mode when available. All systems run on the X51v could run in landscape mode because the hardware provides a landscape mode feature. However, the X5 does not provide this functionality. Only pTHINC directly supports landscape mode, so it was the only system that could run in landscape mode on both the X5 and the X51v.

To provide a fair comparison, we also standardized on common hardware and operating systems whenever possible. All of the systems used the Netfinity server as the thin-client server. For the two systems designed for Windows servers, ICA and RDP, we ran Windows 2003 Server on the server. For the other systems, which support X-based servers, VNC and pTHINC, we ran the Debian Linux Unstable distribution with the Linux 2.6.10 kernel on the server.
We used the latest thin-client server versions available on each platform at the time of our experiments, namely Citrix MetaFrame XP Server for Windows Feature Release 3, Microsoft Remote Desktop built into Windows XP and Windows 2003 using RDP 5.2, and VNC 4.0.

4.2 Application Benchmarks

We used two web application benchmarks for our experiments based on two common application scenarios: browsing web pages and playing video content from the web. Since many thin-client systems, including two of the ones tested, are closed and proprietary, we measured their performance in a noninvasive manner by capturing network traffic with a packet monitor and using a variant of slow-motion benchmarking [13] previously developed to measure thin-client performance in PDA environments [10]. This measurement methodology accounts for both the display decoupling that can occur between client and server in thin-client systems and client processing time, which may be significant in the case of PDAs.

To measure web browsing performance, we used a web browsing benchmark based on the Web Text Page Load Test from the Ziff-Davis i-Bench benchmark suite [7]. The benchmark consists of a JavaScript-controlled load of 55 pages from the web server. The pages contain both text and graphics, with pages varying in size. The graphics are embedded images in GIF and JPEG formats. The original i-Bench benchmark was modified for slow-motion benchmarking by introducing delays of several seconds between the pages using JavaScript. Two tests were then run: one where delays were added between each page, and one where pages were loaded continuously without waiting for them to be displayed on the client. In the first test, delays were sufficiently adjusted in each case to ensure that each page could be received and displayed on the client completely, without temporal overlap in transferring the data belonging to two consecutive pages. We used the packet monitor to record the packet traffic for each run of the benchmark, then used the timestamps of the first and last packet in the trace to obtain our latency measures [10]. The packet monitor also recorded the amount of data transmitted between the client and the server. The ratio between the data traffic in the two tests yields a scale factor. This scale factor reflects the loss of data between the server and the client due to the inability of the client to process the data quickly enough. The product of the scale factor and the latency measurement produces the true latency, accounting for client processing time.

To run the web browsing benchmark, we used Mozilla Firefox 1.0.4 running on the thin-client server for the thin clients, and Windows Internet Explorer (IE) Mobile for 2003 and Mobile for 5.0 for the native browsers on the X5 and X51v PDAs, respectively. In all cases, the web browser used was sized to fill the entire display region available.

To measure video playback performance, we used a video benchmark that consisted of playing a 34.75 s MPEG-1 video clip containing a mix of news and entertainment programming at fullscreen resolution. The video clip is 5.11 MB and consists of 834 352×240-pixel frames with an ideal frame rate of 24 frames/sec.
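The slow-motion correction for the web benchmark described above amounts to a small calculation; a sketch, with the packet-trace measurements as inputs (the names are ours):

def true_latency(measured_latency, data_continuous, data_slow_motion):
    # The slow-motion run delivers all display data, so the ratio of its
    # traffic to the continuous run's traffic estimates how much data the
    # client dropped; scaling the measured latency by this factor yields
    # the true latency, accounting for client processing time.
    scale_factor = data_slow_motion / data_continuous
    return scale_factor * measured_latency

# Example: 0.8 s measured, 90 KB continuous vs. 100 KB slow-motion -> ~0.89 s.
print(true_latency(0.8, 90.0, 100.0))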
We measured video performance using slow-motion benchmarking by monitoring the resulting packet traffic at two playback rates, 1 frame/second (fps) and 24 fps, and comparing the results to determine the playback delays and frame drops that occur at 24 fps, in order to measure overall video quality [13]. For example, 100% quality means that all video frames were played at real-time speed. On the other hand, 50% quality could mean that half the video data was dropped, or that the clip took twice as long to play even though all of the video data was displayed.

To run the video benchmark, we used Windows Media Player 9 for Windows-based thin-client servers, MPlayer 1.0pre6 for X-based thin-client servers, and Windows Media Player 9 Mobile and 10 Mobile for the native video players running locally on the X5 and X51v PDAs, respectively. In all cases, the video player used was sized to fill the entire display region available.

4.3 Measurements

Figures 3 and 4 show the results of running the web browsing benchmark.

Figure 3: Browsing Benchmark: Average Page Latency (seconds, log scale) for LOCAL, RDP, VNC, ICA, ICA Resized, pTHINC, and pTHINC Resized on the Axim X5 and X51v at 640×480 (or less) and 1024×768.

For each platform, we show results for up to four different configurations, two on the X5 and two on the X51v, depending on whether each configuration was supported. However, not all platforms could support all configurations. The local browser only runs at the display resolution of the PDA, 480×640 or less for the X51v and the X5. RDP only runs at 640×480. Neither platform could support 1024×768 display resolution. ICA only ran on the X5 and could not run on the X51v because it did not work on Windows Mobile 5.

Figure 3 shows the average latency per web page for each platform. pTHINC provides the lowest average web browsing latency on both PDAs. On the X5, pTHINC performs up to 70 times better than the other thin-client systems and 8 times better than the local browser. On the X51v, pTHINC performs up to 80 times better than the other thin-client systems and 7 times better than the native browser. In fact, all of the thin clients except VNC outperform the local PDA browser, demonstrating the performance benefits of the thin-client approach. Usability studies have shown that web pages should take less than one second to download for the user to experience an uninterrupted web browsing experience [14]. The measurements show that only the thin clients deliver subsecond web page latencies. In contrast, the local browser requires more than 3 seconds on average per web page. The local browser performs worse since it needs to run a more limited web browser to process the HTML and JavaScript, and to do all the rendering, using the limited capabilities of the PDA. The thin clients can take advantage of faster server hardware and a highly tuned web browser to process the web content much faster.

Figure 3 shows that RDP is the next fastest platform after pTHINC. However, RDP is only able to run at a fixed resolution of 640×480 and 8-bit color depth. Furthermore, RDP also clips the display to the size of the PDA screen so that it does not need to send updates that are not visible on the PDA screen. This provides a performance benefit assuming the remaining web content is not viewed, but degrades performance when a user scrolls around the display to view other web content.
RDP achieves its performance with significantly lower display quality compared to the other thin clients and with additional display clipping not used by the other systems. As a result, RDP performance alone does not provide a complete comparison with the other platforms. In contrast, pTHINC provides the fastest performance while at the same time providing display quality equal to or better than the other systems.

Figure 4: Browsing Benchmark: Average Page Data Transferred (KB, log scale) for LOCAL, RDP, VNC, ICA, ICA Resized, pTHINC, and pTHINC Resized on the Axim X5 and X51v at 640×480 (or less) and 1024×768.

Since VNC and ICA provide display quality similar to pTHINC, these systems provide a fairer comparison of different thin-client approaches. ICA performs worse in part because it uses higher-level display primitives that require additional client processing costs. VNC performs worse in part because it loses display data due to its client-pull delivery mechanism and because of the client processing costs of decompressing raw pixel primitives. In both cases, their performance was limited in part because their PDA clients were unable to keep up with the rate at which web pages were being displayed.

Figure 3 also shows measurements for those thin clients that support resizing the display to fit the PDA screen, namely ICA and pTHINC. Resizing requires additional processing, which results in slower average web page latencies. The measurements show that the additional delay incurred by ICA when resizing versus not resizing is much more substantial than for pTHINC. ICA performs resizing on the slower PDA client. In contrast, pTHINC leverages the more powerful server to do the resizing, reducing the performance difference between resizing and not resizing. Unlike ICA, pTHINC is able to provide subsecond web page download latencies in both cases.

Figure 4 shows the data transferred in KB per page when running the slow-motion version of the tests. All of the platforms have modest data transfer requirements of roughly 100 KB per page or less. This is well within the bandwidth capacity of Wi-Fi networks. The measurements show that the local browser does not transfer the least amount of data. This is surprising, as HTML is often considered to be a very compact representation of content. Instead, RDP is the most bandwidth-efficient platform, largely as a result of using only 8-bit color depth and screen clipping so that it does not transfer the entire web page to the client. pTHINC overall has the largest data requirements, slightly more than VNC. This is largely a result of the current pTHINC prototype's lack of native support for 16-bit color data in the wire protocol. However, this result also highlights pTHINC's performance, as it is faster than all other systems even while transferring more data. Furthermore, as newer PDA models support full 24-bit color, these results indicate that pTHINC will continue to provide good web browsing performance.

Since display usability and quality are as important as performance, Figures 5 to 8 compare screenshots of the different thin clients when displaying a web page, in this case from the popular BBC news website.
Except for ICA, all of the screenshots were taken on the X51v in landscape mode using the maximum display resolution settings for each platform given in Table 2. The ICA screenshot was taken on the X5, since ICA does not run on the X51v.

Figure 5: Browser Screenshot: RDP 640×480
Figure 6: Browser Screenshot: VNC 1024×768
Figure 7: Browser Screenshot: ICA Resized 1024×768
Figure 8: Browser Screenshot: pTHINC Resized 1024×768

While the screenshots lack the visual fidelity of the actual device display, several observations can be made. Figure 5 shows that RDP does not support fullscreen mode and wastes much screen space on controls and UI elements, requiring the user to scroll around in order to access the full contents of the web browsing session. Figure 6 shows that VNC makes better use of the screen space and provides better display quality, but still forces the user to scroll around to view the web page due to its lack of resizing support. Figure 7 shows ICA's ability to display the full web page given its resizing support, but also that its lack of landscape capability and poorer resizing algorithm significantly compromise display quality. In contrast, Figure 8 shows pTHINC using resizing to provide a high-quality fullscreen display of the full width of the web page. pTHINC maximizes the entire viewing region by moving all controls to the PDA buttons. In addition, pTHINC leverages the server's computational power to use a high-quality resizing algorithm to resize the display to fit the PDA screen without significant overhead.

Figures 9 and 10 show the results of running the video playback benchmark. For each platform except ICA, we show results for an X5 and an X51v configuration. ICA could not run on the X51v, as noted earlier. The measurements were done using settings that reflected the environment a user would have to access a web session from both a desktop computer and a PDA. As such, a 1024×768 server display resolution was used whenever possible, and the video was shown at fullscreen. RDP was limited to 640×480 display resolution, as noted earlier. Since viewing the entire video display is the only really usable option, we resized the display to fit the PDA screen for those platforms that supported this feature, namely ICA and pTHINC.

Figure 9 shows the video quality for each platform. pTHINC is the only thin client able to provide perfect video playback quality, similar to the native PDA video player. All of the other thin clients deliver very poor video quality. With the exception of RDP on the X51v, which provided an unacceptable 35% video quality, none of the other systems were even able to achieve 10% video quality. VNC and ICA have the worst quality, at 8% on the X5 device.

pTHINC's native video support enables superior video performance, while the other thin clients suffer from their inability to distinguish video from normal display updates. They attempt to apply ineffective and expensive compression algorithms to the video data and are unable to keep up with the stream of updates generated, resulting in dropped frames or long playback times. VNC suffers further from its client-pull update model, because video frames are generated faster than the rate at which the client can process and send requests to the server to obtain the next display update.
Figure 10 shows the total data transferred during video playback for each system.

Figure 9: Video Benchmark: Fullscreen Video Quality (percent) for LOCAL, RDP, VNC, ICA, and pTHINC on the Axim X5 and X51v.
Figure 10: Video Benchmark: Fullscreen Video Data (MB, log scale) for LOCAL, RDP, VNC, ICA, and pTHINC on the Axim X5 and X51v.

The native player is the most bandwidth-efficient platform, sending less than 6 MB of data, which corresponds to about 1.2 Mbps of bandwidth. pTHINC's 100% video quality requires about 25 MB of data, which corresponds to a bandwidth usage of less than 6 Mbps. While the other thin clients send less data than pTHINC, they do so because they are dropping video data, resulting in degraded video quality.

Figures 11 to 14 compare screenshots of the different thin clients when displaying the video clip. Except for ICA, all of the screenshots were taken on the X51v in landscape mode using the maximum display resolution settings for each platform given in Table 2. The ICA screenshot was taken on the X5, since ICA does not run on the X51v. Figures 11 and 12 show that RDP and VNC are unable to display the entire video frame on the PDA screen. RDP wastes screen space on UI elements, and VNC only shows the top corner of the video frame on the screen. Figure 13 shows that ICA provides resizing to display the entire video frame, but did not proportionally resize the video data, resulting in strange display artifacts. In contrast, Figure 14 shows pTHINC using resizing to provide a high-quality fullscreen display of the entire video frame. pTHINC provides a visually more appealing video display than RDP, VNC, or ICA.

5. RELATED WORK

Several studies have examined the web browsing performance of thin-client computing [13, 19, 10]. The ability of thin clients to improve web browsing performance on wireless PDAs was first quantitatively demonstrated in a previous study by one of the authors [10]. This study demonstrated that thin clients can provide both faster web browsing performance and greater web browsing functionality. The study considered a wide range of web content, including content from medical information systems. Our work builds on this previous study and considers important issues such as how usable existing thin clients are in PDA environments, the trade-offs between thin-client usability and performance, performance across different PDA devices, and the performance of thin clients on common web-related applications such as video.

Many thin clients have been developed, and some have PDA clients, including Microsoft's Remote Desktop [3], Citrix MetaFrame XP [2], Virtual Network Computing [16, 12], GoToMyPC [5], and Tarantella [18]. These systems were first designed for desktop computing and retrofitted for PDAs. Unlike pTHINC, they do not address key system architecture and usability issues important for PDAs. This limits their display quality, system performance, available screen space, and overall usability on PDAs. pTHINC builds on previous work by two of the authors on THINC [1], extending the server architecture and introducing a client interface and usage model to efficiently support PDA devices for mobile web applications.

Other approaches to improving the performance of mobile wireless web browsing have focused on using transcoding and caching proxies in conjunction with the fat-client model [11, 9, 4, 8].
They work by pushing functionality to external proxies and using specialized browsing applications on the PDA device that communicate with the proxy. Our thin-client approach differs fundamentally from these fat-client approaches by pushing all web browser logic to the server, leveraging existing investments in desktop web browsers and helper applications to work seamlessly with production systems without any additional proxy configuration or web browser modifications.

With the emergence of web browsing on small display devices, web sites have been redesigned using mechanisms like WAP, and specialized native web browsers have been developed to cater to the needs of these devices. Recently, Opera has developed the Opera Mini [15] web browser, which uses an approach similar to the thin-client model to provide access across a number of mobile devices that would normally be incapable of running a web browser. Instead of requiring the device to process web pages, it uses a remote server to pre-process the page before sending it to the phone.

6. CONCLUSIONS

We have introduced pTHINC, a thin-client architecture for wireless PDAs. pTHINC provides key architectural and usability mechanisms such as server-side screen resizing, client-side screen rotation using simple copy techniques, YUV video support, and maximizing screen space for display updates by leveraging existing PDA control buttons for UI elements. pTHINC transparently supports traditional desktop browsers and their helper applications on PDA devices and desktop machines, providing mobile users with ubiquitous access to a consistent, personalized, and full-featured web environment across heterogeneous devices. We have implemented pTHINC and measured its performance on web applications compared to existing thin-client systems and native web applications. Our results on multiple mobile wireless devices demonstrate that pTHINC delivers web browsing performance up to 80 times better than existing thin-client systems, and 8 times better than a native PDA browser. In addition, pTHINC is the only PDA thin client that transparently provides full-screen, full frame rate video playback, making web sites with multimedia content accessible to mobile web users.

Figure 11: Video Screenshot: RDP 640×480
Figure 12: Video Screenshot: VNC 1024×768
Figure 13: Video Screenshot: ICA Resized 1024×768
Figure 14: Video Screenshot: pTHINC Resized 1024×768

7. ACKNOWLEDGEMENTS

This work was supported in part by NSF ITR grants CCR-0219943 and CNS-0426623, and an IBM SUR Award.

8. REFERENCES
[1] R. Baratto, L. Kim, and J. Nieh. THINC: A Virtual Display Architecture for Thin-Client Computing. In Proceedings of the 20th ACM Symposium on Operating Systems Principles (SOSP), Oct. 2005.
[2] Citrix Metaframe. http://www.citrix.com.
[3] B. C. Cumberland, G. Carius, and A. Muir. Microsoft Windows NT Server 4.0, Terminal Server Edition: Technical Reference. Microsoft Press, Redmond, WA, 1999.
[4] A. Fox, I. Goldberg, S. D. Gribble, and D. C. Lee. Experience With Top Gun Wingman: A Proxy-Based Graphical Web Browser for the 3Com PalmPilot. In Proceedings of Middleware '98, Lake District, England, September 1998.
[5] GoToMyPC. http://www.gotomypc.com/.
[6] Health Insurance Portability and Accountability Act. http://www.hhs.gov/ocr/hipaa/.
[7] i-Bench version 1.5. http://etestinglabs.com/benchmarks/i-bench/i-bench.asp.
[8] A. Joshi.
On proxy agents, mobility, and web access. Mobile Networks and Applications, 5(4):233-241, 2000.
[9] J. Kangasharju, Y. G. Kwon, and A. Ortega. Design and Implementation of a Soft Caching Proxy. Computer Networks and ISDN Systems, 30(22-23):2113-2121, 1998.
[10] A. Lai, J. Nieh, B. Bohra, V. Nandikonda, A. P. Surana, and S. Varshneya. Improving Web Browsing on Wireless PDAs Using Thin-Client Computing. In Proceedings of the 13th International World Wide Web Conference (WWW), May 2004.
[11] A. Maheshwari, A. Sharma, K. Ramamritham, and P. Shenoy. TranSquid: Transcoding and Caching Proxy for Heterogenous E-Commerce Environments. In Proceedings of the 12th IEEE Workshop on Research Issues in Data Engineering (RIDE '02), Feb. 2002.
[12] .NET VNC Viewer for PocketPC. http://dotnetvnc.sourceforge.net/.
[13] J. Nieh, S. J. Yang, and N. Novik. Measuring Thin-Client Performance Using Slow-Motion Benchmarking. ACM Trans. Computer Systems, 21(1):87-115, Feb. 2003.
[14] J. Nielsen. Designing Web Usability. New Riders Publishing, Indianapolis, IN, 2000.
[15] Opera Mini Browser. http://www.opera.com/products/mobile/operamini/.
[16] T. Richardson, Q. Stafford-Fraser, K. R. Wood, and A. Hopper. Virtual Network Computing. IEEE Internet Computing, 2(1), Jan./Feb. 1998.
[17] R. W. Scheifler and J. Gettys. The X Window System. ACM Trans. Graphics, 5(2):79-106, Apr. 1986.
[18] Sun Secure Global Desktop. http://www.sun.com/software/products/sgd/.
[19] S. J. Yang, J. Nieh, S. Krishnappa, A. Mohla, and M. Sajjadpour. Web Browsing Performance of Wireless Thin-Client Computing. In Proceedings of the 12th International World Wide Web Conference (WWW), May 2003.", "keywords": "pervasive web;remote display;pda thin-client solution;system usability;web browser;thin-client computing;mobility;full-function web browser;pthinc;seamless mobility;functionality;thin-client;local pda web browser;web application;video playback;screen resolution;web browsing performance;high-fidelity display;mobile wireless pda;crucial browser helper application"} {"name": "train_C-71", "title": "A Point-Distribution Index and Its Application to Sensor-Grouping in Wireless Sensor Networks", "abstract": "We propose ι, a novel index for the evaluation of point-distribution. ι is the minimum distance between each pair of points normalized by the average distance between each pair of points. We find that a set of points that achieves a maximum value of ι results in a honeycomb structure. We propose that ι can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs). To validate this idea, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model. We show that locally maximizing ι at sensor nodes is a good approach to solve this problem with an algorithm called Maximizing-ι Node-Deduction (MIND). Simulation results verify that MIND outperforms a greedy algorithm we design that exploits sensor redundancy. This demonstrates a good application of employing ι in coverage-related problems for WSNs.", "fulltext": "1. INTRODUCTION
A wireless sensor network (WSN) consists of a large number of in-situ battery-powered sensor nodes. A WSN can collect data about physical phenomena of interest [1]. There are many potential applications of WSNs, including environmental monitoring and surveillance
[1][11].
In many application scenarios, WSNs are employed to conduct surveillance tasks in adverse, or even hostile, working environments. One major problem this causes is that sensor nodes are subject to failures. Therefore, fault tolerance of a WSN is critical.

One way to achieve fault tolerance is for a WSN to contain a large number of redundant nodes in order to tolerate node failures. It is vital to provide a mechanism by which redundant nodes can work in sleeping mode (i.e., major power-consuming units such as the transceiver of a redundant sensor node can be shut off) to save energy, and thus to prolong the network lifetime. Redundancy should be exploited as much as possible for the set of sensors that are currently taking charge of the surveillance work of the network area [6].

We find that the minimum distance between each pair of points normalized by the average distance between each pair of points serves as a good index to evaluate the distribution of the points. We call this index, denoted by ι, the normalized minimum distance. If points are moveable, we find that maximizing ι results in a honeycomb structure. The honeycomb structure implies that coverage efficiency is best when each point represents a sensor node that is providing surveillance work. Employing ι in coverage-related problems is thus deemed promising.

This suggests that maximizing ι is a good approach to selecting the set of sensors that currently take charge of the surveillance work of the network area. To explore the effectiveness of employing ι in coverage-related problems, we formulate a sensor-grouping problem for high-redundancy WSNs. An algorithm called Maximizing-ι Node-Deduction (MIND) is proposed, in which redundant sensor nodes are removed to obtain a large ι for each set of sensors that currently take charge of the surveillance work of the network area. We also introduce another greedy solution called the Incremental Coverage Quality Algorithm (ICQA) for this problem, which serves as a benchmark to evaluate MIND.

The main contribution of this paper is twofold. First, we introduce a novel index ι for the evaluation of point-distribution. We show that maximizing the ι of a WSN results in low redundancy of the network. Second, we formulate a general sensor-grouping problem for WSNs and provide a general sensing model. With the MIND algorithm we show that locally maximizing ι among each sensor node and its neighbors is a good approach to solve this problem. This demonstrates a good application of employing ι in coverage-related problems.

The rest of the paper is organized as follows. In Section 2, we introduce our point-distribution index ι. We survey related work and formulate a sensor-grouping problem together with a general sensing model in Section 3. Section 4 investigates the application of ι in this grouping problem: we propose MIND for this problem and introduce ICQA as a benchmark. In Section 5, we present our simulation results in which MIND and ICQA are compared. Section 6 provides concluding remarks.

2. THE NORMALIZED MINIMUM DISTANCE ι: A POINT-DISTRIBUTION INDEX
Suppose there are n points in a Euclidean space Ω. The coordinates of these points are denoted by x_i (i = 1, ..., n).
It may be necessary to evaluate how these points are distributed. There are many metrics to achieve this goal.
For example, the Mean Square Error of these points about their mean value can be employed to calculate how the points deviate from their mean (i.e., their center). In resource-sharing evaluation, the Global Fairness Index (GFI) is often employed to measure how evenly a resource is distributed among points [8], where x_i represents the amount of resource that belongs to point i. In WSNs, GFI is usually used to calculate how even the remaining energy of the sensor nodes is.

When n is larger than 2 and the points do not all overlap (the points all overlap when x_i = x_j, ∀ i, j = 1, 2, ..., n), we propose a novel index called the normalized minimum distance, namely ι, to evaluate the distribution of the points. ι is the minimum distance between each pair of points normalized by the average distance between each pair of points. It is calculated by:

\iota = \frac{\min(\|x_i - x_j\|)}{\mu} \quad (\forall\, i, j = 1, 2, \ldots, n;\ i \neq j) \qquad (1)

where ||x_i − x_j|| denotes the Euclidean distance between point i and point j in Ω, the min(·) function calculates the minimum distance between each pair of points, and μ is the average distance between each pair of points, which is:

\mu = \frac{\sum_{i=1}^{n} \sum_{j=1, j \neq i}^{n} \|x_i - x_j\|}{n(n-1)} \qquad (2)

ι measures how well the points separate from one another. Obviously, ι lies in the interval [0, 1]. ι is equal to 1 if and only if n is equal to 3 and the three points form an equilateral triangle. ι is equal to zero if any two points overlap. ι is a very interesting value of a set of points: if we consider each x_i (∀ i = 1, ..., n) a variable in Ω, what would these n points look like if ι were maximized?

An algorithm is implemented to generate a topology in which ι is locally maximized (the algorithm can be found in [19]). We consider a 2-dimensional space. We select n = 10, 20, 30, ..., 100 and run this algorithm. To avoid the algorithm converging to a local optimum, we select different random seeds to generate the initial points 1000 times and keep the run that results in the largest ι when the algorithm converges. Figure 1 demonstrates what the resulting topology looks like for n = 20 as an example.

[Figure 1: A locally ι-maximized topology with Node Number = 20, ι = 0.435376 (node positions on a 160 x 160 plane)]

Suppose each point represents a sensor node, the sensor coverage model is the Boolean coverage model [15][17][18][14], and the coverage radius of each node is the same. It is exciting to see that this topology results in the lowest redundancy, because the Voronoi diagram [2] formed by these nodes (a Voronoi diagram formed by a set of nodes partitions a space into a set of convex polygons such that points inside a polygon are closest to exactly one particular node) is a honeycomb-like structure¹.

This suggests that ι may be employed to solve problems related to the sensor coverage of an area. In WSNs, it is desirable that the active sensor nodes performing a surveillance task be separated from one another. Under the constraint that the sensing area should be covered, the more each node separates from the others, the less redundant the coverage is. ι indicates the quality of such separation.

¹This is how the base stations of a wireless cellular network are deployed and why such a network is called a cellular one.
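To make Equations (1) and (2) concrete, the following short Python sketch (our own illustration, not part of the original paper; the function name and the use of numpy are our choices) computes ι for a set of points:

```python
import numpy as np

def normalized_min_distance(points):
    """Compute the point-distribution index iota (Eqs. (1)-(2)):
    the minimum pairwise distance normalized by the mean pairwise distance."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    assert n > 2, "iota is defined for more than two non-overlapping points"
    # Pairwise Euclidean distances ||x_i - x_j|| over all unordered pairs;
    # the mean over unordered pairs equals the mean in Eq. (2) by symmetry.
    dists = [np.linalg.norm(pts[i] - pts[j])
             for i in range(n) for j in range(i + 1, n)]
    mu = np.mean(dists)          # average pairwise distance, Eq. (2)
    return min(dists) / mu       # iota, Eq. (1)

# Three vertices of an equilateral triangle attain the maximum iota = 1.
triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)]
print(normalized_min_distance(triangle))  # ~1.0
```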
The index ι should therefore be useful in approaches to sensor-coverage related problems.

In the following discussion, we show the effectiveness of employing ι in the sensor-grouping problem.

3. THE SENSOR-GROUPING PROBLEM
In many application scenarios, to achieve fault tolerance, a WSN contains a large number of redundant nodes in order to tolerate node failures. A node sleeping-working schedule scheme is therefore highly desirable to exploit the redundancy of working sensors and let as many nodes as possible sleep.

Much work in the literature addresses this issue [6]. Yan et al. introduced a differentiated service in which a sensor node determines its responsible working duration in cooperation with its neighbors to ensure the coverage of sampled points [17]. Ye et al. developed PEAS, in which sensor nodes wake up randomly over time, probe their neighboring nodes, and decide whether they should begin to take charge of surveillance work [18]. Xing et al. exploited a probabilistic distributed detection model with a protocol called Coordinating Grid (Co-Grid) [16]. Wang et al. designed an approach called the Coverage Configuration Protocol (CCP), which introduced the notion that the coverage degree of the intersection points of neighboring nodes' sensing perimeters indicates the coverage of a convex region [15]. In our recent work [7], we also provided a sleeping configuration protocol, namely SSCP, in which the sleeping eligibility of a sensor node is determined by a local Voronoi diagram. SSCP can provide different levels of redundancy to maintain different requirements of fault tolerance.

The major feature of the aforementioned protocols is that they employ online distributed and localized algorithms in which a sensor node determines its sleeping eligibility and/or sleeping time based on the coverage requirement of its sensing area, using information provided by its neighbors.

Another major approach to the sensor node sleeping-working scheduling issue is to group sensor nodes. The sensor nodes in a network are divided into several disjoint sets, each of which is able to maintain the required area surveillance work. The sensor nodes are scheduled according to the set they belong to. These sets work successively: only one set of sensor nodes works at any time. We call this issue the sensor-grouping problem.

The major advantage of this approach is that it avoids the overhead caused by the processes in which sensor nodes coordinate to decide whether a node is a candidate to sleep or work and how long it should sleep or work. Such processes must be performed from time to time during the lifetime of a network in many online distributed and localized algorithms; the large overhead they cause is the main drawback of such algorithms. On the contrary, roughly speaking, the grouping approach groups sensor nodes once and schedules when each set of sensor nodes should be on duty. It does not require frequent decision-making on working/sleeping eligibility².

In [13] by Slijepcevic et al., the sensing area is divided into regions. Sensor nodes are grouped with the most-constrained least-constraining algorithm. It is a greedy algorithm in which the priority of selecting a given sensor is determined by how many uncovered regions this sensor covers and by the redundancy caused by this sensor. In [5] by Cardei et al., disjoint sensor sets are modeled as disjoint dominating sets.
Although maximum dominating set computation is NP-complete, the authors proposed a graph-coloring based algorithm. Cardei et al. also studied a similar problem in the domain of covering target points in [4]. The NP-completeness of the problem is proved and a heuristic that computes the sets is proposed. These algorithms are centralized solutions to the sensor-grouping problem.

However, global information (e.g., the location of each in-network sensor node) of a large-scale WSN is very expensive to obtain online. It is also usually infeasible to obtain such information before sensor nodes are deployed. For example, sensor nodes are usually deployed in a random manner, and the location of each in-network sensor node is determined only after the node is deployed. A solution to the sensor-grouping problem should therefore be based only on the locally obtainable information of a sensor node. That is to say, nodes should determine which group they should join in a fully distributed way. Here, locally obtainable information refers to a node's local information and the information that can be directly obtained from its adjacent nodes, i.e., nodes within its communication range.

In Subsection 3.1, we provide a general formulation of the sensor-grouping problem, in which the distributed-solution requirement is captured. It is followed in Subsection 3.2 by a discussion of a general sensing model, which serves as a given condition of the sensor-grouping problem formulation.

To facilitate our discussion, the notation used in the following is described as follows.
• n: The number of in-network sensor nodes.
• S(j) (j = 1, 2, ..., m): The jth set of sensor nodes, where m is the number of sets.
• L(i) (i = 1, 2, ..., n): The physical location of node i.
• φ: The area monitored by the network, i.e., the sensing area of the network.
• R: The sensing radius of a sensor node. We assume that a sensor node can only be responsible for monitoring a circular area centered at the node with radius R. This is a usual assumption in work that addresses sensor-coverage related problems. We call this circular area the sensing area of a node.

3.1 Problem Formulation
We assume that each sensor node knows its approximate physical location. Approximate location information is obtainable if each sensor node carries a GPS receiver or if a localization algorithm is employed (e.g., [3]).

²Note that if some nodes die, a re-grouping process might also be performed to exploit the remaining nodes in a set of sensor nodes. How to provide this mechanism is beyond the scope of this paper and is yet to be explored.

Problem 1. Given:
• The set of each sensor node i's sensing neighbors N(i) and the location of each member of N(i);
• A sensing model which quantitatively describes how a point P in area φ is covered by the sensor nodes that are responsible for monitoring this point (we call this quantity the coverage quality of P);
• The coverage quality requirement in φ, denoted by s.
When the coverage quality of a point is larger than this threshold, we say the point is covered.

For each sensor node i, make a decision on which group S(j) it should join so that:
• Area φ can be covered by the sensor nodes in each set S(j);
• m, the number of sets S(j), is maximized.

In this formulation, we call the sensor nodes within a circular area centered at sensor node i with radius 2·R the sensing neighbors of node i. This is because the sensor nodes in this area, together with node i, may cooperate to ensure the coverage of a point inside node i's sensing area.

We assume that the communication range of a sensor node is larger than 2·R, which is also a general assumption in work that addresses sensor-coverage related problems. That is to say, the first given condition in Problem 1 is information that can be obtained directly from a node's adjacent nodes; it is therefore locally obtainable information. The last two given conditions in this problem formulation can be programmed into a node before it is deployed, or by a node-programming protocol (e.g., [9]) during network runtime. Therefore, the given conditions can all be easily obtained by a sensor-grouping scheme with a fully distributed implementation.

We reify this problem with a realistic sensing model in the next subsection.

3.2 A General Sensing Model
As WSNs are usually employed to monitor possible events in a given area, it is a design requirement that an event occurring in the network area must/may be successfully detected by sensors.

This issue is usually formulated as how to ensure that an event signal emitted at an arbitrary point in the network area can be detected by sensor nodes. Obviously, a sensing model is required to address this problem, so that how a point in the network area is covered can be modeled and quantified, and the coverage quality of a WSN can thus be evaluated.

Different applications of WSNs employ different types of sensors, which surely have widely different theoretical and physical characteristics. Therefore, to fulfill different application requirements, different sensing models should be constructed based on the characteristics of the sensors employed.

A simple theoretical sensing model is the Boolean sensing model [15][18][17][14]. The Boolean sensing model assumes that a sensor node can always detect an event occurring in its responsible sensing area. But most sensors detect events according to the sensed signal strength. Event signals usually fade with the physical distance between the event and the sensor: the larger the distance, the weaker the event signal that can be sensed by the sensor, which reduces the probability that the event is successfully detected.

As event signals in WSNs are usually electromagnetic, acoustic, or photic, they fade rapidly (polynomially) with increasing transmission distance. Specifically, the signal strength E(d) of an event that is received by a sensor node satisfies:

E(d) = \frac{\alpha}{d^{\beta}} \qquad (3)

where d is the physical distance from the event to the sensor node, α is related to the signal strength emitted by the event, and β is the signal fading factor, which is typically a positive number larger than or equal to 2.
Usually, α and β are considered constants.

Based on this notion, to be more realistic, researchers have proposed a collaborative sensing model to capture application requirements: area coverage can be maintained by a set of collaborating sensor nodes. For a point with physical location L, the point is considered covered by the collaboration of i sensors (denoted by k_1, ..., k_i) if and only if the following two conditions hold [7][10][12]:

\forall j = 1, \ldots, i: \quad \|L(k_j) - L\| < R \qquad (4)

C(L) = \sum_{j=1}^{i} E(\|L(k_j) - L\|) > s \qquad (5)

C(L) is regarded as the coverage quality of location L in the network area [7][10][12].

However, we notice that defining the sensibility as the sum of the signal strengths sensed by each collaborating sensor implies a very special application: the application must employ the sum of the signal strengths for its decision-making. To capture more general and realistic application requirements, we modify the definition in Equation (5). The model we adopt in this paper is described in detail as follows.

We consider that the probability P(L, k_j) that an event located at L can be detected by sensor k_j is related to the signal strength sensed by k_j. Formally,

P(L, k_j) = \gamma E(d) = \frac{\delta}{(\|L(k_j) - L\|/\varepsilon + 1)^{\beta}} \qquad (6)

where γ is a constant and δ = γα is a constant too. ε normalizes the distance to a proper scale, and the "+1" term avoids an infinite value of P(L, k_j).

The probability that an event located at L can be detected by the collaborating sensors that satisfy Equation (4) is:

P'(L) = 1 - \prod_{j=1}^{i} (1 - P(L, k_j)) \qquad (7)

As the detection probability P'(L) reasonably determines how well an event occurring at location L can be detected by the network, it is a good measure of the coverage quality of location L in a WSN. Specifically, Equation (5) is modified to:

C(L) = P'(L) = 1 - \prod_{j=1}^{i} \left[1 - \frac{\delta}{(\|L(k_j) - L\|/\varepsilon + 1)^{\beta}}\right] > s \qquad (8)

To sum up, we consider a point at location L covered if Equations (4) and (8) hold.
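To make the sensing model concrete, here is a minimal Python sketch (our own illustration; the parameter values are borrowed from the simulation settings in Table 1 of Section 5, and the function names are our choices) that evaluates Equations (4) and (8) for a point:

```python
import math

# Assumed parameter values, taken from the simulation settings (Table 1).
R, BETA, DELTA, EPS, S_REQ = 80.0, 2.0, 1.0, 100.0, 0.6

def detect_prob(sensor_loc, event_loc):
    """P(L, k_j) of Eq. (6): one sensor's detection probability."""
    d = math.dist(sensor_loc, event_loc)
    return DELTA / (d / EPS + 1.0) ** BETA

def coverage_quality(sensors, point):
    """C(L) of Eqs. (7)-(8), using only sensors within radius R (Eq. (4))."""
    in_range = [s for s in sensors if math.dist(s, point) < R]
    miss = 1.0
    for s in in_range:
        miss *= 1.0 - detect_prob(s, point)  # probability that all sensors miss
    return 1.0 - miss

def is_covered(sensors, point):
    return coverage_quality(sensors, point) > S_REQ

sensors = [(0.0, 0.0), (50.0, 0.0), (25.0, 40.0)]
print(coverage_quality(sensors, (25.0, 15.0)))  # ~0.94, so the point is covered
```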
4. MAXIMIZING-ι NODE-DEDUCTION ALGORITHM FOR THE SENSOR-GROUPING PROBLEM
Before we proceed to introduce algorithms that solve the sensor-grouping problem, let us define the margin (denoted by θ) of an area φ monitored by the network as the band-like marginal area of φ such that all points on the outer perimeter of θ are ρ away from all points on the inner perimeter of θ. ρ is called the margin length.

In a practical network, sensor nodes are usually evenly deployed in the network area. Obviously, the number of sensor nodes that can sense an event occurring in the margin of the network is smaller than the number of sensor nodes that can sense an event occurring elsewhere in the network. Based on this consideration, in our algorithm design we ensure the coverage quality of the network area except the margin. The information on φ and ρ is network-based: each in-network sensor node can be pre-programmed or informed online about φ and ρ, and can thus calculate whether a point in its sensing area is in the margin or not.

4.1 Maximizing-ι Node-Deduction Algorithm
The node-deduction process of our Maximizing-ι Node-Deduction Algorithm (MIND) is simple. A node i greedily maximizes the ι of the sub-network composed of itself, its ungrouped sensing neighbors, and the neighbors that are in the same group as itself. Under the constraint that the coverage quality of its sensing area must be ensured, node i deletes the nodes of this sub-network one by one. A candidate for pruning satisfies the following:
• It is an ungrouped node.
• Its deletion will not result in uncovered points inside the sensing area of i.

A candidate is deleted if its deletion results in the largest ι of the sub-network compared to the deletion of any other candidate. This node-deduction process continues until no candidate can be found. Then all the ungrouped sensing neighbors that have not been deleted are grouped into the same group as node i. We call the sensing neighbors that are in the same group as node i the group sensing neighbors of node i. We then call node i a finished node, meaning that it has finished the above procedure and the sensing area of the node is covered. Nodes that have not yet finished this procedure are called unfinished nodes.

The above procedure starts at a randomly selected node that is not in the margin. The node is grouped into the first group. It calculates its resulting group sensing neighbors based on the above procedure and informs these group sensing neighbors that they have been selected into the group. It then hands the procedure over to the unfinished group sensing neighbor that is farthest from itself. This group sensing neighbor continues the procedure until no unfinished neighbor can be found; the first group is then formed (an algorithmic description of this procedure can be found in [19]).

After a group is formed, another randomly selected ungrouped node begins to group itself into the second group and initiates the above procedure. In this way, groups are formed one by one. When a node involved in this algorithm finds that the coverage quality of its sensing area, except the part overlapping the network margin, cannot be ensured even if all its ungrouped sensing neighbors are grouped into the same group as itself, the algorithm stops. MIND is based on the locally obtainable information of sensor nodes. It is a distributed algorithm that serves as an approximate solution of Problem 1.
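The node-deduction step can be sketched as follows (our own illustration, not the authors' implementation from [19]; iota and covered stand for the helpers sketched above, and test_points is a hypothetical list of sampled points of node i's sensing area outside the margin):

```python
def node_deduction(i, subnet, ungrouped, test_points, iota, covered):
    """Greedy pruning step of MIND (a sketch under stated assumptions).

    subnet      -- dict: node id -> location, node i's local sub-network
    ungrouped   -- set of node ids in subnet that are not yet grouped
    test_points -- points of node i's sensing area whose coverage must hold
    iota        -- function: list of locations -> normalized minimum distance
    covered     -- function: (list of locations, point) -> bool, Eqs. (4)+(8)
    """
    while True:
        best, best_iota = None, -1.0
        for cand in ungrouped:
            rest = [loc for n, loc in subnet.items() if n != cand]
            # Pruning cand must leave every test point of node i covered.
            if all(covered(rest, p) for p in test_points):
                score = iota(rest)
                if score > best_iota:
                    best, best_iota = cand, score
        if best is None:          # no prunable candidate remains
            return ungrouped      # survivors join node i's group
        del subnet[best]
        ungrouped.discard(best)
```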
4.2 Incremental Coverage Quality Algorithm: A Benchmark for MIND
To evaluate the effectiveness of introducing ι into the sensor-grouping problem, another algorithm for this problem, called the Incremental Coverage Quality Algorithm (ICQA), is designed. Our aim is to evaluate how an idea like MIND, based on locally maximizing ι, performs.

In ICQA, the node-selecting process is as follows. A node i greedily selects ungrouped sensing neighbors into the same group as itself one by one, and informs each neighbor that it has been selected into the group. The criteria are:
• The selected neighbor is responsible for providing surveillance work for some uncovered parts of node i's sensing area (i.e., the coverage quality requirement of these parts is not fulfilled if this neighbor is not selected).
• The selected neighbor results in the highest improvement of the coverage quality of the neighbor's sensing area.

The improvement of the coverage quality should, mathematically, be the integral of the improvements over all points inside the neighbor's sensing area. A numerical approximation is employed to calculate this improvement; details are presented in our simulation study.

This node-selecting process continues until the sensing area of node i is entirely covered. In this way, node i's group sensing neighbors are found. The above procedure is handed over in the same manner as in MIND, and new groups are thus formed one by one. The condition under which ICQA stops is the same as for MIND. ICQA is also based on the locally obtainable information of sensor nodes, and it is also a distributed algorithm that serves as an approximate solution of Problem 1.

5. SIMULATION RESULTS
To evaluate the effectiveness of employing ι in the sensor-grouping problem, we build simulated surveillance networks. We employ MIND and ICQA to group the in-network sensor nodes, and we compare the grouping results with respect to how many groups both algorithms find and how the resulting groups perform.

Detailed settings of the simulated networks are shown in Table 1. In the simulated networks, sensor nodes are randomly deployed in a uniform manner in the network area.

Table 1: The settings of the simulation networks
Area of sensor field:  400m x 400m
ρ (margin length):     20m
R:                     80m
α, β, γ and ε:         1.0, 2.0, 1.0 and 100.0
s:                     0.6

For evaluating the coverage quality of the sensing area of a node, we divide the sensing area into several regions and regard the coverage quality of the central point of each region as representative of the coverage quality of the region. This is a numerical approximation; a larger number of regions results in a better approximation. As sensor nodes have low computational capacity, there is a tradeoff between the number of regions and the precision of the resulting coverage quality of a node's sensing area. In our simulation study, we set this number to 12. For evaluating the improvement of coverage quality in ICQA, we sum up the improvements at each region center as the total improvement.

5.1 Number of Groups Formed by MIND and ICQA
We set the total in-network node number to different values and let the networks perform MIND and ICQA. For each n, simulations run with several random seeds to generate different networks, and the results are averaged. Figure 2 shows the group numbers found in networks with different values of n.

[Figure 2: The number of groups found by MIND and ICQA for total in-network node numbers from 500 to 2000]

We can see that MIND always outperforms ICQA in terms of the number of groups formed. Obviously, the larger the number of groups that can be formed, the more the redundancy of each group is exploited. This output shows that an approach like MIND, which aims to maximize the ι of the resulting topology, exploits redundancy well.

As an example, for n = 1500, the results of five networks are listed in Table 2.

Table 2: The grouping results of five networks with n = 1500
Net   Group Number (MIND)   Group Number (ICQA)   Average ι (MIND)   Average ι (ICQA)
1     34                    31                    0.145514           0.031702
2     33                    30                    0.145036           0.036649
3     33                    31                    0.156483           0.033578
4     32                    31                    0.152671           0.029030
5     33                    32                    0.146560           0.033109

The difference between the average ι of the groups in each network shows that the groups formed by MIND result in topologies with larger ι's. This demonstrates that ι is a good indicator of redundancy across different networks.

5.2 The Performance of the Resulting Groups
Although MIND forms more groups than ICQA does, which implies a longer network lifetime, another important consideration is how the groups formed by MIND and ICQA perform. We let 10000 events occur randomly in the network area except the margin.
We compare how many events occur at locations where the coverage quality is less than the requirement s = 0.6 while each resulting group is conducting the surveillance work (we call the number of such events the failure number of a group). Figure 3 shows the average failure numbers of the resulting groups for different node numbers.

[Figure 3: The failure numbers of the groups formed by MIND and ICQA for total in-network node numbers from 500 to 2000]

We can see that the groups formed by MIND outperform those formed by ICQA, as the groups formed by MIND result in lower failure numbers. This further demonstrates that MIND is a good approach to the sensor-grouping problem.

6. CONCLUSION
This paper proposes ι, a novel index for the evaluation of point-distribution. ι is the minimum distance between each pair of points normalized by the average distance between each pair of points. We find that a set of points that achieves a maximum value of ι results in a honeycomb structure. We propose that ι can serve as a good index to evaluate the distribution of the points, which can be employed in coverage-related problems in wireless sensor networks (WSNs). We set out to validate this idea by applying ι to a sensor-grouping problem. We formulate a general sensor-grouping problem for WSNs and provide a general sensing model. With an algorithm called Maximizing-ι Node-Deduction (MIND), we show that maximizing ι at sensor nodes is a good approach to solve this problem. Simulation results verify that MIND outperforms a greedy algorithm we design that exploits sensor redundancy, in terms of both the number and the performance of the groups formed. This demonstrates a good application of employing ι in coverage-related problems.

7. ACKNOWLEDGEMENT
The work described in this paper was substantially supported by two grants, RGC Project No. CUHK4205/04E and UGC Project No. AoE/E-01/99, of the Hong Kong Special Administrative Region, China.

8. REFERENCES
[1] I. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci. A survey on wireless sensor networks. IEEE Communications Magazine, 40(8):102-114, 2002.
[2] F. Aurenhammer. Voronoi diagrams - a survey of a fundamental geometric data structure. ACM Computing Surveys, 23(2):345-405, September 1991.
[3] N. Bulusu, J. Heidemann, and D. Estrin. GPS-less low-cost outdoor localization for very small devices. IEEE Personal Communications, October 2000.
[4] M. Cardei and D.-Z. Du. Improving wireless sensor network lifetime through power aware organization. ACM Wireless Networks, 11(3), May 2005.
[5] M. Cardei, D. MacCallum, X. Cheng, M. Min, X. Jia, D. Li, and D.-Z. Du. Wireless sensor networks with energy efficient organization. Journal of Interconnection Networks, 3(3-4), December 2002.
[6] M. Cardei and J. Wu. Coverage in wireless sensor networks. In Handbook of Sensor Networks (eds. M. Ilyas and I. Magboub), CRC Press, 2004.
[7] X. Chen and M. R. Lyu. A sensibility-based sleeping configuration protocol for dependable wireless sensor networks. CSE Technical Report, The Chinese University of Hong Kong, 2005.
[8] R. Jain, W. Hawe, and D. Chiu. A quantitative measure of fairness and discrimination for resource allocation in shared computer systems. Technical Report DEC-TR-301, September 1984.
[9] S. S. Kulkarni and L. Wang. MNP: Multihop network reprogramming service for sensor networks. In Proc.
of the 25th International Conference on Distributed Computing Systems (ICDCS), June 2005.
[10] B. Liu and D. Towsley. A study on the coverage of large-scale sensor networks. In Proc. of the 1st IEEE International Conference on Mobile Ad-hoc and Sensor Systems, Fort Lauderdale, FL, October 2004.
[11] A. Mainwaring, J. Polastre, R. Szewczyk, D. Culler, and J. Anderson. Wireless sensor networks for habitat monitoring. In Proc. of the ACM International Workshop on Wireless Sensor Networks and Applications, 2002.
[12] S. Megerian, F. Koushanfar, G. Qu, G. Veltri, and M. Potkonjak. Exposure in wireless sensor networks: Theory and practical solutions. Wireless Networks, 8, 2002.
[13] S. Slijepcevic and M. Potkonjak. Power efficient organization of wireless sensor networks. In Proc. of the IEEE International Conference on Communications (ICC), volume 2, Helsinki, Finland, June 2001.
[14] D. Tian and N. D. Georganas. A node scheduling scheme for energy conservation in large wireless sensor networks. Wireless Communications and Mobile Computing, 3:272-290, May 2003.
[15] X. Wang, G. Xing, Y. Zhang, C. Lu, R. Pless, and C. Gill. Integrated coverage and connectivity configuration in wireless sensor networks. In Proc. of the 1st ACM International Conference on Embedded Networked Sensor Systems (SenSys), Los Angeles, CA, November 2003.
[16] G. Xing, C. Lu, R. Pless, and J. A. O'Sullivan. Co-Grid: an efficient coverage maintenance protocol for distributed sensor networks. In Proc. of the 3rd International Symposium on Information Processing in Sensor Networks (IPSN), Berkeley, CA, April 2004.
[17] T. Yan, T. He, and J. A. Stankovic. Differentiated surveillance for sensor networks. In Proc. of the 1st ACM International Conference on Embedded Networked Sensor Systems (SenSys), Los Angeles, CA, November 2003.
[18] F. Ye, G. Zhong, J. Cheng, S. Lu, and L. Zhang. PEAS: A robust energy conserving protocol for long-lived sensor networks. In Proc. of the 23rd International Conference on Distributed Computing Systems (ICDCS), Providence, Rhode Island, May 2003.
[19] Y. Zhou, H. Yang, and M. R. Lyu. A point-distribution index and its application in coverage-related problems. CSE Technical Report, The Chinese University of Hong Kong, 2006.", "keywords": "sensor-grouping;sensor group;fault tolerance;sleeping configuration protocol;surveillance;redundancy;sensor coverage;incremental coverage quality algorithm;node-deduction process;point-distribution index;wireless sensor network;honeycomb structure"} {"name": "train_C-72", "title": "GUESS: Gossiping Updates for Efficient Spectrum Sensing", "abstract": "Wireless radios of the future will likely be frequency-agile, that is, supporting opportunistic and adaptive use of the RF spectrum. Such radios must coordinate with each other to build an accurate and consistent map of spectral utilization in their surroundings. We focus on the problem of sharing RF spectrum data among a collection of wireless devices. The inherent requirements of such data and the time granularity at which it must be collected make this problem both interesting and technically challenging. We propose GUESS, a novel incremental gossiping approach to coordinated spectral sensing. It (1) reduces protocol overhead by limiting the amount of information exchanged between participating nodes, (2) is resilient to network alterations due to node movement or node failures, and (3) allows exponentially-fast information convergence.
We outline an initial solution incorporating these ideas and also show how our approach reduces network overhead by up to a factor of 2.4 and results in up to 2.7 times faster information convergence than alternative approaches.", "fulltext": "1. INTRODUCTION
There has recently been a huge surge in the growth of wireless technology, driven primarily by the availability of unlicensed spectrum. However, this has come at the cost of increased RF interference, which has caused the Federal Communications Commission (FCC) in the United States to re-evaluate its strategy on spectrum allocation. Currently, the FCC has licensed RF spectrum to a variety of public and private institutions, termed primary users. New spectrum allocation regimes implemented by the FCC use dynamic spectrum access schemes to either negotiate or opportunistically allocate RF spectrum to unlicensed secondary users that can use it when the primary user is absent. The second type of allocation scheme is termed opportunistic spectrum sharing. The FCC has already legislated this access method for the 5 GHz band and is also considering the same for TV broadcast bands [1].

[Figure 1: Without cooperation, shadowed secondary users (devices D1-D5) are not able to detect the presence of the primary user.]

As a result, a new wave of intelligent radios, termed cognitive radios (or software defined radios), is emerging that can dynamically re-tune their radio parameters based on interactions with their surrounding environment.

Under the new opportunistic allocation strategy, secondary users are obligated not to interfere with primary users (senders or receivers). This can be done by sensing the environment to detect the presence of primary users. However, local sensing is not always adequate, especially in cases where a secondary user is shadowed from a primary user, as illustrated in Figure 1. Here, coordination between secondary users is the only way for shadowed users to detect the primary. In general, cooperation improves sensing accuracy by an order of magnitude when compared to not cooperating at all [5].

To realize this vision of dynamic spectrum access, two fundamental problems must be solved: (1) efficient and coordinated spectrum sensing and (2) distributed spectrum allocation. In this paper, we propose strategies for coordinated spectrum sensing that are low cost, operate on timescales comparable to the agility of the RF environment, and are resilient to network failures and alterations.
We defer the problem of spectrum allocation to future work.

Spectrum sensing techniques for cognitive radio networks [4, 17] are broadly classified into three regimes: (1) centralized coordinated techniques, (2) decentralized coordinated techniques, and (3) decentralized uncoordinated techniques. We advocate a decentralized coordinated approach, similar in spirit to OSPF link-state routing used in the Internet. This is more effective than uncoordinated approaches because making decisions based only on local information is fallible (as shown in Figure 1). Moreover, compared to centralized approaches, decentralized techniques are more scalable, robust, and resistant to network failures and security attacks (e.g. jamming).

Coordinating sensory data between cognitive radio devices is technically challenging because accurately assessing spectrum usage requires exchanging potentially large amounts of data with many radios at very short time scales. Data size grows rapidly due to the large number (i.e. thousands) of spectrum bands that must be scanned. This data must also be exchanged between potentially hundreds of neighboring secondary users at short time scales, to account for rapid changes in the RF environment.

This paper presents GUESS, a novel approach to coordinated spectrum sensing for cognitive radio networks. Our approach is motivated by the following key observations:
1. Low-cost sensors collect approximate data: Most devices have limited sensing resolution because they are low-cost and low duty-cycle devices and thus cannot perform complex RF signal processing (e.g. matched filtering). Many are typically equipped with simple energy detectors that gather only approximate information.
2. Approximate summaries are sufficient for coordination: Approximate statistical summaries of sensed data are sufficient for correlating sensed information between radios, as relative usage information is more important than absolute usage data. Thus, exchanging exact RF information may not be necessary, and more importantly, is too costly for the purposes of spectrum sensing.
3. RF spectrum changes incrementally: On most bands, RF spectrum utilization changes infrequently. Moreover, utilization of a specific RF band affects only that band and not the entire spectrum. Therefore, if the usage pattern of a particular band changes substantially, nodes detecting that change can initiate an update protocol to update the information for that band alone, leaving in place information already collected for other bands. This allows rapid detection of change while saving the overhead of exchanging unnecessary information.

Based on these observations, GUESS makes the following contributions:
1. A novel approach that applies randomized gossiping algorithms to the problem of coordinated spectrum sensing. These algorithms are well suited to coordinated spectrum sensing due to the unique characteristics of the problem: i.e. radios are power-limited, mobile, and have limited bandwidth to support spectrum sensing capabilities.
2. An application of in-network aggregation for the dissemination of spectrum summaries. We argue that approximate summaries are adequate for performing accurate radio parameter tuning.
3. An extension of in-network aggregation and randomized gossiping to support incremental maintenance of spectrum summaries.
Compared to standard gossiping approaches, incremental techniques can further reduce overhead and protocol execution time by requiring fewer radio resources.

The rest of the paper is organized as follows. Section 2 motivates the need for a low cost and efficient approach to coordinated spectrum sensing. Section 3 discusses related work in the area, while Section 4 provides a background on in-network aggregation and randomized gossiping. Sections 5 and 6 discuss extensions and protocol details of these techniques for coordinated spectrum sensing. Section 7 presents simulation results showcasing the benefits of GUESS, and Section 8 presents a discussion and some directions for future work.

2. MOTIVATION
To estimate the scale of the problem, In-Stat predicts that the number of WiFi-enabled devices sold annually alone will grow to 430 million by 2009 [2]. Therefore, it would be reasonable to assume that a typical dense urban environment will contain several thousand cognitive radio devices in range of each other. As a result, distributed spectrum sensing and allocation become both important and fundamental.

Coordinated sensing among secondary radios is essential due to limited device sensing resolution and physical RF effects such as shadowing. Cabric et al. [5] illustrate the gains from cooperation and show an order of magnitude reduction in the probability of interference with the primary user when only a small fraction of secondary users cooperate.

However, such coordination is non-trivial due to: (1) the limited bandwidth available for coordination, (2) the need to communicate this information on short timescales, and (3) the large amount of sensory data that needs to be exchanged.

Limited Bandwidth: Due to restrictions of cost and power, most devices will likely not have dedicated hardware for supporting coordination. This implies that both data and sensory traffic will need to be time-multiplexed onto a single radio interface. Therefore, any time spent communicating sensory information takes away from the device's ability to perform its intended function. Thus, any such coordination must incur minimal network overhead.

Short Timescales: Further compounding the problem is the need to immediately propagate updated RF sensory data, in order to allow devices to react to it in a timely fashion. This is especially true due to mobility, as rapid changes of the RF environment can occur due to device and obstacle movements. Here, fading and multi-path interference heavily impact sensing abilities. Signal level can drop to a deep null with just a λ/4 movement in receiver position (3.7 cm at 2 GHz), where λ is the wavelength [14]. Coordination which does not support rapid dissemination of information will not be able to account for such RF variations.

Large Sensory Data: Because cognitive radios can potentially use any part of the RF spectrum, there will be numerous channels that they need to scan. Suppose we wish to compute the average signal energy in each of 100 discretized frequency bands, and each signal can have up to 128 discrete energy levels.
Exchanging complete sensory information between nodes would require 700 bits per transmission (for 100 channels, each requiring seven bits of information). Exchanging this information among even a small group of 50 devices each second would require (50 time-steps × 50 devices × 700 bits per transmission) = 1.67 Mbps of aggregate network bandwidth.

Contrast this to the use of a randomized gossip protocol to disseminate such information, and the use of FM bit vectors to perform in-network aggregation. By applying gossip and FM aggregation, aggregate bandwidth requirements drop to (c·log N time-steps × 50 devices × 700 bits per transmission) = 0.40 Mbps, since 12 time-steps are needed to propagate the data (with c = 2, for illustrative purposes¹). This is explained further in Section 4.

Based on these insights, we propose GUESS, a low-overhead approach which uses incremental extensions to FM aggregation and randomized gossiping for efficient coordination within a cognitive radio network. As we show in Section 7, these incremental extensions can further reduce bandwidth requirements by up to a factor of 2.4 over the standard approaches discussed above.

¹Convergence time is correlated with the connectivity topology of the devices, which in turn depends on the environment.

[Figure 2: Using FM aggregation to compute the average signal level measured by a group of devices.]

3. RELATED WORK
Research in cognitive radio has increased rapidly [4, 17] over the years, and it is being projected as one of the leading enabling technologies for wireless networks of the future [9]. As mentioned earlier, the FCC has already identified new regimes for spectrum sharing between primary users and secondary users, and a variety of systems have been proposed in the literature to support such sharing [4, 17].

Detecting the presence of a primary user is non-trivial, especially for a legacy primary user that is not cognitive radio aware. Secondary users must be able to detect the primary even if they cannot properly decode its signals. This has been shown by Sahai et al. [16] to be extremely difficult even if the modulation scheme is known. Sophisticated and costly hardware, beyond a simple energy detector, is required to improve signal detection accuracy [16]. Moreover, a shadowed secondary user may not even be able to detect signals from the primary. As a result, simple local sensing approaches have not gained much momentum. This has motivated the need for cooperation among cognitive radios [16].

More recently, some researchers have proposed approaches for radio coordination. Liu et al. [11] consider a centralized access point (or base station) architecture in which sensing information is forwarded to APs for spectrum allocation purposes. APs direct mobile clients to collect such sensing information on their behalf. However, due to the need for a fixed AP infrastructure, such a centralized approach is clearly not scalable.

In other work, Zhao et al. [17] propose a distributed coordination approach for spectrum sensing and allocation. Cognitive radios organize into clusters and coordination occurs within clusters. The CORVUS [4] architecture proposes a similar clustering method that can use either a centralized or decentralized approach to manage clusters.
Although an improvement over purely centralized approaches, these techniques still require a setup phase to generate the clusters, which not only adds additional delay, but also requires many of the secondary users to be static or quasi-static. In contrast, GUESS does not place such restrictions on secondary users, and can even function in highly mobile environments.

4. BACKGROUND
This section provides the background for our approach. We present the FM aggregation scheme that we use to generate spectrum summaries and perform in-network aggregation. We also discuss randomized gossiping techniques for disseminating aggregates in a cognitive radio network.

4.1 FM Aggregation
Aggregation is the process whereby nodes in a distributed network combine data received from neighboring nodes with their local value to generate a combined aggregate. This aggregate is then communicated to other nodes in the network, and the process repeats until the aggregate at all nodes has converged to the same value, i.e. the global aggregate. Double-counting is a well known problem in this process, where nodes may contribute more than once to the aggregate, causing inaccuracy in the final result. Intuitively, nodes could tag the aggregate value they transmit with information about which nodes have contributed to it; however, this approach is not scalable. Order and Duplicate Insensitive (ODI) techniques have been proposed in the literature [10, 15]. We adopt the ODI approach pioneered by Flajolet and Martin (FM) for the purposes of aggregation. Next we outline the FM approach; for full details, see [7].

Suppose we want to compute the number of nodes in the network, i.e. the COUNT query. To do so, each node performs a coin toss experiment as follows: toss an unbiased coin, stopping after the first head is seen. The node then sets the ith bit in a bit vector (initially filled with zeros), where i is the number of coin tosses it performed. The intuition is that as the number of nodes doing coin toss experiments increases, the probability of a more significant bit being set in one of the nodes' bit vectors increases.

These bit vectors are then exchanged among nodes. When a node receives a bit vector, it updates its local bit vector by bitwise OR-ing it with the received vector (as shown in Figure 2, which computes AVERAGE). At the end of the aggregation process, every node, with high probability, has the same bit vector. The actual value of the count aggregate is then computed using the formula AGG_FM = 2^{j-1}/0.77351, where j represents the bit position of the least significant zero in the aggregate bit vector [7].

Although such aggregates are very compact in nature, requiring only O(log N) state space (where N is the number of nodes), they may not be very accurate, as they can only approximate values to the closest power of 2, potentially causing errors of up to 50%. More accurate aggregates can be computed by maintaining multiple bit vectors at each node, as explained in [7]. This decreases the error to within O(1/√m), where m is the number of such bit vectors. Queries other than count can also be computed using variants of this basic counting algorithm, as discussed in [3] (and shown in Figure 2).
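A minimal sketch of the FM machinery just described (our own illustration in Python, not the GUESS implementation):

```python
import random

def fm_insert(bitvec, value=1):
    """Run `value` coin-toss experiments and set the corresponding bits.
    For COUNT, value is 1; for SUM, value is the local measurement."""
    for _ in range(value):
        i = 1
        while random.random() < 0.5:  # keep tossing until the first head
            i += 1
        bitvec |= 1 << (i - 1)        # set the i-th bit (1-indexed)
    return bitvec

def fm_merge(a, b):
    """ODI merge: bitwise OR makes the sketch order/duplicate insensitive."""
    return a | b

def fm_estimate(bitvec):
    """AGG_FM = 2^(j-1) / 0.77351, j = position of the least significant zero."""
    j = 1
    while bitvec & (1 << (j - 1)):
        j += 1
    return 2 ** (j - 1) / 0.77351

# Each of 50 nodes contributes one experiment; vectors are OR-ed together.
agg = 0
for _ in range(50):
    agg = fm_merge(agg, fm_insert(0))
print(fm_estimate(agg))  # rough estimate of COUNT = 50 (power-of-2 accuracy)
```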
Transmitting FM bit vectors between nodes is done using randomized gossiping, discussed next.

4.2 Gossip Protocols
Gossip-based protocols operate in discrete time-steps; a time-step is the amount of time required for all transmissions in that time-step to complete. At every time-step, each node having something to send randomly selects one or more neighboring nodes and transmits its data to them. The randomized propagation of information provides fault-tolerance and resilience to network failures and outages. We emphasize that this characteristic of the protocol also allows it to operate without relying on any underlying network structure. Gossip protocols have been shown to provide exponentially fast convergence², on the order of O(log N) [10], where N is the number of nodes (or radios). These protocols can therefore easily scale to very dense environments.

²Convergence refers to the state in which all nodes have the most up-to-date view of the network.

Two types of gossip protocols are:
• Uniform Gossip: In uniform gossip, at each time-step, each node chooses a random neighbor and sends its data to it. This process repeats for O(log N) steps (where N is the number of nodes in the network). Uniform gossip provides exponentially fast convergence, with low network overhead [10].
• Random Walk: In random walk, only a subset of the nodes (termed designated nodes) communicate in a particular time-step. At startup, k nodes are randomly elected as designated nodes. In each time-step, each designated node sends its data to a random neighbor, which becomes designated for the subsequent time-step (much like passing a token). This process repeats until the aggregate has converged in the network. Random walk has been shown to provide convergence bounds similar to uniform gossip in problems of similar context [8, 12].

5. INCREMENTAL PROTOCOLS
5.1 Incremental FM Aggregates
One limitation of FM aggregation is that it does not support updates. Due to the probabilistic nature of FM, once bit vectors have been OR-ed together, information cannot simply be removed from them, as each node's contribution has not been recorded. We propose the use of delete vectors, an extension of FM to support updates. We maintain a separate aggregate delete vector whose value is subtracted from the original aggregate vector's value to obtain the resulting value, as follows:

AGG_{INC} = \frac{2^{a-1}}{0.77351} - \frac{2^{b-1}}{0.77351} \qquad (1)

Here, a and b represent the bit positions of the least significant zero in the original and delete bit vectors respectively.

Suppose we wish to compute the average signal level detected in a particular frequency band. To compute this, we compute the SUM of all signal level measurements and divide it by the COUNT of the number of measurements. A SUM aggregate is computed similarly to COUNT (explained in Section 4.1), except that each node performs s coin toss experiments, where s is the locally measured signal level. Figure 2 illustrates the sequence by which the average signal energy is computed in a particular band using FM aggregation.

Now suppose that the measured signal at a node changes from s to s'. The vectors are updated as follows.
• s' > s: We simply perform (s' − s) more coin toss experiments and bitwise OR the result with the original bit vector.
• s' < s: We increase the value of the delete vector by performing (s − s') coin toss experiments and bitwise OR-ing the result with the current delete vector.
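Continuing the sketch from Section 4.1 (again our own illustration, built on the fm_insert and fm_estimate helpers assumed above), delete vectors can be layered on top of a plain FM sketch like this:

```python
class IncrementalFM:
    """Incremental FM aggregate with a delete vector (Eq. (1)); a sketch
    under the assumptions stated in the lead-in, not the authors' code."""
    def __init__(self):
        self.original = 0  # OR-accumulated insert experiments
        self.delete = 0    # OR-accumulated delete experiments

    def update(self, s_old, s_new):
        if s_new > s_old:    # add (s_new - s_old) experiments
            self.original = fm_insert(self.original, s_new - s_old)
        elif s_new < s_old:  # record the decrease in the delete vector
            self.delete = fm_insert(self.delete, s_old - s_new)

    def merge(self, other):
        """ODI merge of two devices' aggregates: OR both vectors."""
        self.original |= other.original
        self.delete |= other.delete

    def value(self):
        # AGG_INC = 2^(a-1)/0.77351 - 2^(b-1)/0.77351
        return fm_estimate(self.original) - fm_estimate(self.delete)

# A node whose measured signal level rises from 0 to 40, then drops to 25.
agg = IncrementalFM()
agg.update(0, 40)
agg.update(40, 25)
print(agg.value())  # rough estimate of the node's current SUM contribution
```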
Now suppose that the measured signal at a node changes from s to s'. The vectors are updated as follows:
• s' > s: We simply perform (s' - s) more coin toss experiments and bitwise OR the result with the original bit vector.
• s' < s: We increase the value of the delete vector by performing (s - s') coin toss experiments and bitwise OR the result with the current delete vector.

Using delete vectors, we can now support updates to the measured signal level. With the original implementation of FM, the aggregate would need to be discarded and a new one recomputed every time an update occurred. Thus, delete vectors provide a low overhead alternative for applications whose data changes incrementally, such as signal level measurements in a coordinated spectrum sensing environment. Next we discuss how these aggregates can be communicated between devices using incremental routing protocols.

5.2 Incremental Routing Protocol
We use the following incremental variants of the routing protocols presented in Section 4.2 to support incremental updates to previously computed aggregates.

Figure 3: State diagram each device passes through as updates proceed in the system. Devices start in the Susceptible state; an update received or a local update moves a device to Infectious; when its time-stamp expires it moves to Recovered, cleans up, and returns to Susceptible; additional updates received while Infectious keep it Infectious.

• Incremental Gossip Protocol (IGP): When an update occurs, the updated node initiates the gossiping procedure. Other nodes only begin gossiping once they receive the update. Therefore, nodes receiving the update become active and continue communicating with their neighbors until the update protocol terminates, after O(log(N)) time steps.
• Incremental Random Walk Protocol (IRWP): When an update (or updates) occur in the system, instead of starting random walks at k random nodes in the network, all k random walks are initiated from the updated node(s). The rest of the protocol proceeds in the same fashion as the standard random walk protocol. The allocation of walks to updates is discussed in more detail in [3], where the authors show that the number of walks has an almost negligible impact on network overhead.

6. PROTOCOL DETAILS
Using incremental routing protocols to disseminate incremental FM aggregates is a natural fit for the problem of coordinated spectrum sensing. Here we outline the implementation of such techniques for a cognitive radio network. We continue with the example from Section 5.1, where we wish to perform coordination between a group of wireless devices to compute the average signal level in a particular frequency band.

Using either incremental random walk or incremental gossip, each device proceeds through three phases in order to determine the global average signal level for a particular frequency band. Figure 3 shows a state diagram of these phases, and the sketch below captures the corresponding transitions.
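Read as a small state machine, the transitions of Figure 3 can be rendered as follows; the enum and method names are illustrative assumptions, and the convergence timeout stands in for the O(log N) time-steps discussed in Section 4.2.

// Sketch of the per-device update state machine of Figure 3.
enum Phase { SUSCEPTIBLE, INFECTIOUS, RECOVERED }

class DeviceState {
    Phase phase = Phase.SUSCEPTIBLE;   // initial state
    long lastUpdateMillis;             // time-stamp of most recent update

    // Local measurement change or received update message.
    void onUpdate(long nowMillis) {
        lastUpdateMillis = nowMillis;  // additional updates re-infect
        phase = Phase.INFECTIOUS;
    }

    // Called periodically; convergenceMillis approximates the time
    // needed (O(log N) time-steps) for the last update to converge.
    void tick(long nowMillis, long convergenceMillis) {
        if (phase == Phase.INFECTIOUS
                && nowMillis - lastUpdateMillis > convergenceMillis) {
            phase = Phase.RECOVERED;   // time-stamp expired
            cleanUp();
            phase = Phase.SUSCEPTIBLE; // ready for the next infection
        }
    }

    private void cleanUp() { /* reset gossip buffers, etc. */ }
}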
Susceptible: Each device starts in the susceptible state and becomes infectious only when its locally measured signal level changes, or if it receives an update message from a neighboring device. If a local change is observed, the device updates either the original or delete bit vector, as described in Section 5.1, and moves into the infectious state. If it receives an update message, it ORs the received original and delete bit vectors with its local bit vectors and moves into the infectious state.

Note that, because signal level measurements may change sporadically over time, a smoothing function, such as an exponentially weighted moving average, should be applied to these measurements; a minimal sketch of such a filter follows.
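As an illustration of this smoothing step, a simple exponentially weighted moving average can be applied to raw readings before they are allowed to trigger updates; the filter below, including its alpha parameter, is a generic sketch rather than the specific function used in GUESS.

// Sketch of an exponentially weighted moving average (EWMA) used to
// smooth raw signal-level readings before they trigger updates.
class EwmaFilter {
    private final double alpha;   // weight of newest sample, 0 < alpha <= 1
    private double smoothed;
    private boolean initialized;

    EwmaFilter(double alpha) {
        this.alpha = alpha;
    }

    double update(double sample) {
        if (!initialized) {
            smoothed = sample;     // seed with the first reading
            initialized = true;
        } else {
            smoothed = alpha * sample + (1 - alpha) * smoothed;
        }
        return smoothed;
    }
}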
Infectious: Once a device is infectious, it continues to send its up-to-date bit vectors, using either incremental random walk or incremental gossip, to neighboring nodes. Due to FM's order and duplicate insensitive (ODI) properties, simultaneously occurring updates are handled seamlessly by the protocol.

Update messages contain a time stamp indicating when the update was generated, and each device maintains a local time stamp of when it received the most recent update. Using this information, a device moves into the recovered state once enough time has passed for the most recent update to have converged. As discussed in Section 4.2, this happens after O(log(N)) time steps.

Figure 4: Execution times of incremental protocols, as a function of the number of measured signal changes: (a) incremental gossip and uniform gossip on a clique; (b) incremental random walk and random walk on a clique; (c) incremental random walk and random walk on a power-law random graph.

Figure 5: Network overhead of incremental protocols (overhead improvement ratio, normalized to the corresponding non-incremental protocol), as a function of the number of measured signal changes: (a) incremental gossip and uniform gossip on a clique; (b) incremental random walk and random walk on a clique; (c) incremental random walk and random walk on a power-law random graph.

Recovered: A recovered device ceases to propagate any update information. At this point, it performs clean-up and prepares for the next infection by entering the susceptible state. Once all devices have entered the recovered state, the system will have converged, and with high probability, all devices will have the up-to-date average signal level. Due to the cumulative nature of FM, even if all devices have not converged, the next update will include all previous updates. Nevertheless, the probability that gossip fails to converge is small, and has been shown to be O(1/N) [10].

For coordinated spectrum sensing, non-incremental routing protocols can be implemented in a similar fashion. Random walk would operate by having devices periodically drop the aggregate and re-run the protocol. Each device would perform a coin toss (biased on the number of walks) to determine whether or not it is a designated node. This is different from the protocol discussed above, where only updated nodes initiate random walks. Similar techniques can be used to implement standard gossip.

7. EVALUATION
We now provide a preliminary evaluation of GUESS in simulation. A more detailed evaluation of this approach can be found in [3]. Here we focus on how incremental extensions to gossip protocols can lead to further improvements over standard gossiping techniques, for the problem of coordinated spectrum sensing.

Simulation Setup: We implemented a custom simulator in C++. We study the improvements of our incremental gossip protocols over standard gossiping in two dimensions: execution time and network overhead. We use two topologies to represent device connectivity: a clique, to eliminate the effects of the underlying topology on protocol performance, and a BRITE-generated [13] power-law random graph (PLRG), to illustrate how our results extend to more realistic scenarios. We simulate a large deployment of 1,000 devices to analyze protocol scalability.

In our simulations, we compute the average signal level in a particular band by disseminating FM bit vectors. In each run of the simulation, we induce a change in the measured signal at one or more devices. A run ends when the new average signal level has converged in the network. For each data point, we ran 100 simulations; 95% confidence intervals (error bars) are shown.

Simulation Parameters: Each transmission involves sending 70 bits of information to a neighboring node. To compute the AVERAGE aggregate, four bit vectors need to be transmitted: the original SUM vector, the SUM delete vector, the original COUNT vector, and the COUNT delete vector. Non-incremental protocols do not transmit the delete vectors. Each transmission also includes a time stamp of when the update was generated.

We assume nodes communicate on a common control channel at 2 Mbps. Therefore, one time-step of protocol execution corresponds to the time required for 1,000 nodes to sequentially send 70 bits at 2 Mbps (the worked calculation below makes this concrete). Sequential use of the control channel is a worst case for our protocols; in practice, multiple control channels could be used in parallel to reduce execution time. We also assume nodes are loosely time synchronized, the implications of which are discussed further in [3]. Finally, in order to isolate the effect of protocol operation on performance, we do not model the complexities of the wireless channel in our simulations.
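Under these parameters, the length of one simulated time-step follows directly; the short calculation below is our own illustration of the stated model.

// Worked example of the simulated time-step length: 1,000 nodes each
// sequentially sending 70 bits over a 2 Mbps control channel.
public class TimeStepLength {
    public static void main(String[] args) {
        int nodes = 1000;
        int bitsPerTransmission = 70;
        double channelBps = 2_000_000.0;                // 2 Mbps

        double perNodeSeconds = bitsPerTransmission / channelBps; // 35 us
        double timeStepSeconds = nodes * perNodeSeconds;          // 0.035 s

        System.out.printf("One time-step = %.1f ms%n",
                          timeStepSeconds * 1000);
        // Prints: One time-step = 35.0 ms
    }
}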
Incremental Protocols Reduce Execution Time: Figure 4(a) compares the performance of incremental gossip (IGP) with uniform gossip on a clique topology. We observe that both protocols have almost identical execution times. This is expected, as IGP operates in a similar fashion to uniform gossip, taking O(log(N)) time-steps to converge.

Figure 4(b) compares the execution times of incremental random walk (IRWP) and standard random walk on a clique. IRWP reduces execution time by a factor of 2.7 for a small number of measured signal changes. Although random walk and IRWP both use k random walks (in our simulations, k = number of nodes), IRWP initiates walks only from updated nodes (as explained in Section 5.2), resulting in faster information convergence. These improvements carry over to a PLRG topology as well (as shown in Figure 4(c)), where IRWP is 1.33 times faster than random walk.

Incremental Protocols Reduce Network Overhead: Figure 5(a) shows the ratio of data transmitted using uniform gossip relative to incremental gossip on a clique. For a small number of signal changes, incremental gossip incurs 2.4 times less overhead than uniform gossip. This is because in the early steps of protocol execution, only devices which detect signal changes communicate. As more signal changes are introduced into the system, gossip and incremental gossip incur approximately the same overhead.

Similarly, incremental random walk (IRWP) incurs much less overhead than standard random walk. Figure 5(b) shows a 2.7-fold reduction in overhead for small numbers of signal changes on a clique. Although each protocol uses the same number of random walks, IRWP uses fewer network resources than random walk because it takes less time to converge. This improvement also holds true on more complex PLRG topologies (as shown in Figure 5(c)), where we observe a 33% reduction in network overhead.

From these results it is clear that incremental techniques yield significant improvements over standard approaches to gossip, even on complex topologies. Because spectrum utilization is characterized by incremental changes to usage, incremental protocols are ideally suited to solve this problem in an efficient and cost-effective manner.

8. DISCUSSION AND FUTURE WORK
We have only just scratched the surface in addressing the problem of coordinated spectrum sensing using incremental gossiping. Next, we outline some open areas of research.

Spatial Decay: Devices performing coordinated sensing are primarily interested in the spectrum usage of their local neighborhood. Therefore, we recommend the use of spatially decaying aggregates [6], which limit the impact of an update on more distant nodes. Spatially decaying aggregates work by successively reducing (by means of a decay function) the value of the update as it propagates further from its origin. One challenge with this approach is that propagation distance cannot be determined ahead of time and, more importantly, exhibits spatio-temporal variations. Therefore, finding the optimal decay function is non-trivial, and an interesting subject of future work.

Significance Threshold: RF spectrum bands continually experience small-scale changes which may not necessarily be significant. Deciding if a change is significant can be done using a significance threshold β, below which any observed change is not propagated by the node (a minimal check is sketched below). Choosing an appropriate operating value for β is application dependent, and explored further in [3].
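The gating logic itself is straightforward; the fragment below is an illustrative sketch in which the field and method names are our assumptions, and beta would be tuned per application as noted above.

// Sketch of a significance threshold: suppress updates whose magnitude
// of change falls below beta, so minor fluctuations are not gossiped.
class SignificanceGate {
    private final double beta;       // application-dependent threshold
    private double lastPropagated;   // last signal level actually sent

    SignificanceGate(double beta, double initialLevel) {
        this.beta = beta;
        this.lastPropagated = initialLevel;
    }

    // Returns true if the new (smoothed) reading should be propagated.
    boolean shouldPropagate(double level) {
        if (Math.abs(level - lastPropagated) < beta) {
            return false;            // insignificant change: stay quiet
        }
        lastPropagated = level;
        return true;
    }
}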
Weighted Readings: Although we argued that most devices will likely be equipped with low-cost sensing equipment, there may be situations where some special infrastructure nodes have better sensing abilities than others. Weighting their measurements more heavily could be used to maintain a higher degree of accuracy. Determining how to assign such weights is an open area of research.

Implementation Specifics: Finally, implementing gossip for coordinated spectrum sensing is also open. If implemented at the MAC layer, it may be feasible to piggy-back gossip messages over existing management frames (e.g., networking advertisement messages). As well, we also require the use of a control channel to disseminate sensing information. There are a variety of alternatives for implementing such a channel, some of which are outlined in [4]. The trade-offs of the different approaches to implementing GUESS are a subject of future work.

9. CONCLUSION
Spectrum sensing is a key requirement for dynamic spectrum allocation in cognitive radio networks. The nature of the RF environment necessitates coordination between cognitive radio devices. We propose GUESS, an approximate yet low-overhead approach to perform efficient coordination between cognitive radios. The fundamental contributions of GUESS are: (1) an FM aggregation scheme for efficient in-network aggregation, (2) a randomized gossiping approach which provides exponentially fast convergence and robustness to network alterations, and (3) incremental variations of FM and gossip which we show can reduce the communication time by up to a factor of 2.7 and reduce network overhead by up to a factor of 2.4. Our preliminary simulation results showcase the benefits of this approach, and we also outline a set of open problems that make this a new and exciting area of research.

10. REFERENCES
[1] Unlicensed Operation in the TV Broadcast Bands and Additional Spectrum for Unlicensed Devices Below 900 MHz and in the 3 GHz Band, May 2004. Notice of Proposed Rule-Making 04-186, Federal Communications Commission.
[2] In-Stat: Covering the Full Spectrum of Digital Communications Market Research, from Vendor to End-user, December 2005. http://www.in-stat.com/catalog/scatalogue.asp?id=28.
[3] N. Ahmed, D. Hadaller, and S. Keshav. Incremental Maintenance of Global Aggregates. Technical Report CS-2006-19, University of Waterloo, ON, Canada, 2006.
[4] R. W. Brodersen, A. Wolisz, D. Cabric, S. M. Mishra, and D. Willkomm. CORVUS: A Cognitive Radio Approach for Usage of Virtual Unlicensed Spectrum. Technical report, July 2004.
[5] D. Cabric, S. M. Mishra, and R. W. Brodersen. Implementation Issues in Spectrum Sensing for Cognitive Radios. In Asilomar Conference, 2004.
[6] E. Cohen and H. Kaplan. Spatially-Decaying Aggregation Over a Network: Model and Algorithms. In Proceedings of SIGMOD 2004, pages 707-718, New York, NY, USA, 2004. ACM Press.
[7] P. Flajolet and G. N. Martin. Probabilistic Counting Algorithms for Data Base Applications. J. Comput. Syst. Sci., 31(2):182-209, 1985.
[8] C. Gkantsidis, M. Mihail, and A. Saberi. Random Walks in Peer-to-Peer Networks. In Proceedings of INFOCOM 2004, pages 1229-1240, 2004.
[9] E. Griffith. Previewing Intel's Cognitive Radio Chip, June 2005. http://www.internetnews.com/wireless/article.php/3513721.
[10] D. Kempe, A. Dobra, and J. Gehrke. Gossip-Based Computation of Aggregate Information. In FOCS 2003, page 482, Washington, DC, USA, 2003. IEEE Computer Society.
[11] X. Liu and S. Shankar. Sensing-based Opportunistic Channel Access. In ACM Mobile Networks and Applications (MONET) Journal, March 2005.
[12] Q. Lv, P. Cao, E. Cohen, K. Li, and S. Shenker. Search and Replication in Unstructured Peer-to-Peer Networks. In Proceedings of ICS, 2002.
[13] A. Medina, A. Lakhina, I. Matta, and J. Byers. BRITE: An Approach to Universal Topology Generation. In Proceedings of the MASCOTS Conference, Aug. 2001.
[14] S. M. Mishra, A. Sahai, and R. W. Brodersen. Cooperative Sensing among Cognitive Radios. In ICC 2006, June 2006.
[15] S. Nath, P. B. Gibbons, S. Seshan, and Z. R. Anderson. Synopsis Diffusion for Robust Aggregation in Sensor Networks. In Proceedings of SenSys 2004, pages 250-262, 2004.
[16] A. Sahai, N. Hoven, S. M. Mishra, and R. Tandra. Fundamental Tradeoffs in Robust Spectrum Sensing for Opportunistic Frequency Reuse. Technical Report, UC Berkeley, 2006.
[17] J. Zhao, H. Zheng, and G.-H. Yang. Distributed Coordination in Dynamic Spectrum Allocation Networks. In Proceedings of DySPAN 2005, Baltimore (MD), Nov. 2005.
", "keywords": "coordinate spectrum sense;coordinated sensing;cognitive radio;spatially decaying aggregate;rf interference;spectrum allocation;opportunistic spectrum sharing;innetwork aggregation;incremental algorithm;rf spectrum;spectrum sensing;incremental gossip protocol;fm aggregation;gossip protocol"} {"name": "train_C-74", "title": "Adapting Asynchronous Messaging Middleware to ad-hoc Networking", "abstract": "The characteristics of mobile environments, with the possibility of frequent disconnections and fluctuating bandwidth, have forced a rethink of traditional middleware. In particular, the synchronous communication paradigms often employed in standard middleware do not appear to be particularly suited to ad-hoc environments, in which not even the intermittent availability of a backbone network can be assumed. Instead, asynchronous communication seems to be a generally more suitable paradigm for such environments. Message oriented middleware for traditional systems has been developed and used to provide an asynchronous paradigm of communication for distributed systems, and, recently, also for some specific mobile computing systems. In this paper, we present our experience in designing, implementing and evaluating EMMA (Epidemic Messaging Middleware for ad-hoc networks), an adaptation of Java Message Service (JMS) for mobile ad-hoc environments. We discuss in detail the design challenges and some possible solutions, showing a concrete example of the feasibility and suitability of the application of the asynchronous paradigm in this setting and outlining a research roadmap for the coming years.", "fulltext": "1. INTRODUCTION
With the increasing popularity of mobile devices and their widespread adoption, there is a clear need to allow the development of a broad spectrum of applications that operate effectively over such an environment. Unfortunately, this is far from simple: mobile devices are increasingly heterogeneous in terms of processing capabilities, memory size, battery capacity, and network interfaces. Each such configuration has substantially different characteristics that are both statically different - for example, there is a major difference in capability between a Berkeley mote and an 802.11g-equipped laptop - and that vary dynamically, as in situations of fluctuating bandwidth and intermittent connectivity. Mobile ad-hoc environments have an additional element of complexity in that they are entirely decentralised.

In order to craft applications for such complex environments, an appropriate form of middleware is essential if cost-effective development is to be achieved. In this paper, we examine one of the foundational aspects of middleware for mobile ad-hoc environments: that of the communication primitives.

Traditionally, the most frequently used middleware primitives for communication assume the simultaneous presence of both end points on a network, since the stability and pervasiveness of the networking infrastructure is not an unreasonable assumption for most wired environments. In other words, most communication paradigms are synchronous: object-oriented middleware such as CORBA and Java RMI are typical examples of middleware based on synchronous communication.

In recent years, there has been growing interest in platforms based on asynchronous communication paradigms, such as publish-subscribe systems [6]: these have been exploited very successfully where there is application-level asynchronicity.
From a Gartner Market Report [7]: "Given message-oriented middleware's (MOM) popularity, scalability, flexibility, and affinity with mobile and wireless architectures, by 2004, MOM will emerge as the dominant form of communication middleware for linking mobile and enterprise applications (0.7 probability)...." Moreover, in mobile ad-hoc systems, the likelihood of network fragmentation means that synchronous communication may in any case be impracticable, giving situations in which delay-tolerant asynchronous traffic is the only form of traffic that could be supported.

Middleware for mobile ad-hoc environments must therefore support semi-synchronous or completely asynchronous communication primitives if it is to avoid substantial limitations to its utility. Aside from the intellectual challenge in supporting this model, this work is also interesting because there are a number of practical application domains, such as allowing inter-community communication in undeveloped areas of the globe. Thus, for example, projects have been carried out to help populations that live in remote places of the globe such as Lapland [3], or in poor areas that lack fixed connectivity infrastructure [9].

There have been attempts to provide mobile middleware with these properties, including STEAM, LIME, XMIDDLE, and Bayou (see [11] for a more complete review of mobile middleware). These models differ quite considerably from the existing traditional middleware in terms of the primitives provided. Furthermore, some of them fail to provide a solution for true ad-hoc scenarios.

If the projected success of MOM becomes anything like a reality, there will be many programmers with experience of it. The ideal solution to the problem of middleware for ad-hoc systems is, then, to allow programmers to utilise the same paradigms and models presented by common forms of MOM, and to ensure that these paradigms are supportable within the mobile environment. This approach has clear advantages in allowing applications developed on standard middleware platforms to be easily deployed on mobile devices. Indeed, some research has already led to the adaptation of traditional middleware platforms to mobile settings, mainly to provide integration between mobile devices and existing fixed networks in a nomadic (i.e., mixed) environment [4]. With respect to message-oriented middleware, the current implementations, however, either assume the existence of a backbone network to which the mobile hosts connect from time to time while roaming [10], or assume that nodes are always somehow reachable through a path [18]. No adaptation to heterogeneous or completely ad-hoc scenarios, with frequent disconnection and periodically isolated clouds of hosts, has been attempted.

In the remainder of this paper we describe an initial attempt to adapt message-oriented middleware to suit mobile and, more specifically, mobile ad-hoc networks. In our case, we elected to examine JMS, as one of the most widely known MOM systems. In the latter part of this paper, we explore the limitations of our results and describe the plans we have to take the work further.

2. MESSAGE ORIENTED MIDDLEWARE AND JAVA MESSAGE SERVICE (JMS)
Message-oriented middleware systems support communication between distributed components via message-passing: the sender sends a message to identified queues, which usually reside on a server.
A receiver retrieves the message from the queue at a different time and may acknowledge the reply using the same asynchronous mechanism. Message-oriented middleware thus supports asynchronous communication in a very natural way, achieving de-coupling of senders and receivers. A sender is able to continue processing as soon as the middleware has accepted the message; eventually, the receiver will send an acknowledgment message and the sender will be able to collect it at a convenient time. However, given the way they are implemented, these middleware systems usually require resource-rich devices, especially in terms of memory and disk space, where persistent queues of messages that have been received but not yet processed are stored. Sun Java Message Service [5], IBM WebSphere MQ [6], and Microsoft MSMQ [12] are examples of very successful message-oriented middleware for traditional distributed systems.

The Java Message Service (JMS) is a collection of interfaces for asynchronous communication between distributed components. It provides a common way for Java programs to create, send and receive messages. JMS users are usually referred to as clients. The JMS specification further defines providers as the components in charge of implementing the messaging system and providing the administrative and control functionality (i.e., persistence and reliability) required by the system. Clients can send and receive messages, asynchronously, through the JMS provider, which is in charge of the delivery and, possibly, of the persistence of the messages.

Two types of communication are supported: the point to point and publish-subscribe models. In the point to point model, hosts send messages to queues. Receivers can be registered with some specific queues, and can asynchronously retrieve the messages and then acknowledge them. The publish-subscribe model is based on the use of topics that can be subscribed to by clients. Messages are sent to topics by other clients and are then received in an asynchronous mode by all the subscribed clients. Clients learn about the available topics and queues through the Java Naming and Directory Interface (JNDI) [14]. Queues and topics are created by an administrator on the provider and are registered with the JNDI interface for look-up; a minimal sketch of this client-side pattern is shown below.
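As a concrete illustration of the point to point model just described, the fragment below sketches a JMS 1.1 client that looks up a queue via JNDI, sends a message, and later retrieves it; the JNDI names ("ConnectionFactory" and "queue/alerts") are deployment-specific assumptions.

import javax.jms.*;
import javax.naming.InitialContext;

// Minimal JMS 1.1 point to point sketch: JNDI lookup, send, receive.
public class PointToPointExample {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory =
                (ConnectionFactory) jndi.lookup("ConnectionFactory");
        Queue queue = (Queue) jndi.lookup("queue/alerts");

        Connection connection = factory.createConnection();
        Session session =
                connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Producer side: enqueue a message asynchronously.
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("sensor reading: 42"));

        // Consumer side: retrieve the message at a convenient time.
        MessageConsumer consumer = session.createConsumer(queue);
        connection.start();                       // begin delivery
        TextMessage msg = (TextMessage) consumer.receive(1000);
        if (msg != null) {
            System.out.println("Received: " + msg.getText());
        }
        connection.close();
    }
}

The publish-subscribe model is used in the same way, substituting a Topic for the Queue on the same Session API.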
In the next section, we introduce the challenges of mobile networks, and show how JMS can be adapted to cope with these requirements.

3. JMS FOR MOBILE COMPUTING
Mobile networks vary very widely in their characteristics, from nomadic networks in which nodes relocate whilst offline, through to ad-hoc networks in which nodes move freely and in which there is no infrastructure. Mobile ad-hoc networks are most generally applicable in situations where survivability and instant deployability are key: most notably in military applications and disaster relief. In between these two types of mobile networks, there are, however, a number of possible heterogeneous combinations, where nomadic and ad-hoc paradigms are used to interconnect totally unwired areas to more structured networks (such as a LAN or the Internet).

Whilst the JMS specification has been extensively implemented and used in traditional distributed systems, adaptations for mobile environments have been proposed only recently. The challenges of porting JMS to mobile settings are considerable; however, in view of its widespread acceptance and use, there are considerable advantages in allowing the adaptation of existing applications to mobile environments and in allowing the interoperation of applications in the wired and wireless regions of a network.

In [10], JMS was adapted to a nomadic mobile setting, where mobile hosts can be JMS clients and communicate through the JMS provider that, however, sits on a backbone network, providing reliability and persistence. The client prototype presented in [10] is very lightweight, due to the delegation of all the heavyweight functionality to the provider on the wired network. However, this approach is somewhat limited in terms of widespread applicability and scalability, as a consequence of the concentration of functionality in the wired portion of the network.

If JMS is to be adapted to completely ad-hoc environments, where no fixed infrastructure is available, and where nodes change location and status very dynamically, more issues must be taken into consideration. Firstly, discovery needs to use a resilient but distributed model: in this extremely dynamic environment, static solutions are unacceptable. As discussed in Section 2, a JMS administrator defines queues and topics on the provider. Clients can then learn about them using the Java Naming and Directory Interface (JNDI). However, due to the way JNDI is designed, a JNDI node (or more than one) needs to be in reach in order to obtain a binding of a name to an address (i.e., knowing where a specific queue/topic is). In mobile ad-hoc environments, the discovery process cannot assume the existence of a fixed set of discovery servers that are always reachable, as this would not match the dynamicity of ad-hoc networks.

Secondly, a JMS provider, as suggested by the JMS specification, also needs to be reachable by each node in the network in order to communicate. This assumes a very centralised architecture, which again does not match the requirements of a mobile ad-hoc setting, in which nodes may be moving and sparse: a more distributed and dynamic solution is needed. Persistence is, however, essential functionality in asynchronous communication environments, as hosts are, by definition, connected at different times.

In the following section, we discuss our experience in designing and implementing JMS for mobile ad-hoc networks.

4. JMS FOR MOBILE ad-hoc NETWORKS
4.1 Adaptation of JMS for Mobile ad-hoc Networks
Developing applications for mobile networks is yet more challenging: in addition to the same considerations as for infrastructured wireless environments, such as the limited device capabilities and power constraints, there are issues of rate of change of network connectivity, and the lack of a static routing infrastructure. Consequently, we now describe an initial attempt to adapt the JMS specification to target the particular requirements related to ad-hoc scenarios. As discussed in Section 3, a JMS application can use either the point to point or the publish-subscribe style of messaging.

Point to Point Model. The point to point model is based on the concept of queues, which are used to enable asynchronous communication between the producer of a message and possibly different consumers. In our solution, the location of queues is determined by a negotiation process that is application dependent.
For example, let us suppose that it is possible to know a priori, or to determine dynamically, that a certain host is the receiver of most of the messages sent to a particular queue. In this case, the optimum location of the queue may well be on this particular host. In general, it is worth noting that, according to the JMS specification and suggested design patterns, it is common and preferable for a client to have all of its messages delivered to a single queue.

Queues are advertised periodically to the hosts that are within transmission range, or that are reachable by means of the underlying synchronous communication protocol, if provided. It is important to note that, at the middleware level, it is logically irrelevant whether or not the network layer implements some form of ad-hoc routing (though it is considerably more efficient if it does); the middleware only considers information about which nodes are actively reachable at any point in time. The hosts that receive advertisement messages add entries to their JNDI registry. Each entry is characterized by a lease (a mechanism similar to that present in Jini [15]). A lease represents the time of validity of a particular entry. If a lease is not refreshed (i.e., its life is not extended), it can expire and, consequently, the entry is deleted from the registry. In other words, the host assumes that the queue will be unreachable from that point in time. This may happen, for example, if a host storing the queue becomes unreachable. A host that initiates a discovery process will find the topics and the queues present in its connected portion of the network in a straightforward manner; a minimal sketch of such a lease-based registry follows.
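To illustrate the lease mechanism, the fragment below sketches a registry whose entries expire unless refreshed by later advertisements; the class and method names are our own illustrative choices rather than EMMA's actual interfaces.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a lease-based registry for advertised queues and topics.
class LeaseRegistry {
    private final Map<String, Long> expiryByName = new ConcurrentHashMap<>();

    // Called on receipt of an advertisement: create or refresh a lease.
    void advertise(String destinationName, long leaseMillis) {
        expiryByName.put(destinationName,
                         System.currentTimeMillis() + leaseMillis);
    }

    // A destination is known only while its lease is still valid.
    boolean isReachable(String destinationName) {
        Long expiry = expiryByName.get(destinationName);
        return expiry != null && expiry > System.currentTimeMillis();
    }

    // Periodic purge of expired entries: the host assumes these
    // queues/topics have become unreachable.
    void purgeExpired() {
        long now = System.currentTimeMillis();
        expiryByName.values().removeIf(expiry -> expiry <= now);
    }
}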
In order to deliver a message to a host that is not currently in reach¹, we use an asynchronous epidemic routing protocol that will be discussed in detail in Section 4.2. If two hosts are in the same cloud (i.e., a connected path exists between them), but no synchronous protocol is available, the messages are sent using the epidemic protocol. In this case, the delivery latency will be low as a result of the rapidity of propagation of the infection in the connected cloud (see also the simulation results in Section 5). Given the existence of an epidemic protocol, the discovery mechanism consists of advertising the queues to the hosts that are currently unreachable using analogous mechanisms.

¹ In theory, it is not possible to send a message to a peer that has never been reachable in the past, since no entry can be present in the registry. However, to overcome this possible limitation, we provide a primitive through which information can be added to the registry without using the normal channels.

Publish-Subscribe Model. In the publish-subscribe model, some of the hosts are similarly designated to hold topics and store subscriptions, as before. Topics are advertised through the registry in the same way as are queues, and a client wishing to subscribe to a topic must register with the client holding the topic. When a client wishes to send a message to the topic list, it sends it to the topic holder (in the same way as it would send a message to a queue). The topic holder then forwards the message to all subscribers, using the synchronous protocol if possible, and the epidemic protocol otherwise. It is worth noting that we use a single message with multiple recipients, instead of multiple messages, each with a single recipient. When a message is delivered to one of the subscribers, this recipient is deleted from the list. In order to delete the other possible replicas, we employ acknowledgment messages (discussed in Section 4.4), returned in the same way as a normal message.

We have also adapted the concepts of durable and non-durable subscriptions for ad-hoc settings. In fixed platforms, durable subscriptions are maintained during the disconnections of the clients, whether these are intentional or are the result of failures. In traditional systems, while a durable subscriber is disconnected from the server, it is the server's responsibility to store messages; when the durable subscriber reconnects, the server sends it all unexpired messages. The problem is that, in our scenario, disconnections are the norm rather than the exception. In other words, we cannot consider disconnections as failures. For these reasons, we adopt a slightly different semantics. With respect to durable subscriptions, if a subscriber becomes disconnected, notifications are not stored but are sent using the epidemic protocol rather than the synchronous protocol. In other words, durable notifications remain valid during the possible disconnections of the subscriber.

On the other hand, if a non-durable subscriber becomes disconnected, its subscription is deleted; in other words, during disconnections, notifications are not sent using the epidemic protocol but exploit only the synchronous protocol. If the topic becomes accessible to this host again, it must make another subscription in order to receive the notifications.

Unsubscription messages are delivered in the same way as subscription messages. It is important to note that durable subscribers have explicitly to unsubscribe from a topic in order to stop the notification process; however, all durable subscriptions have a predefined expiration time in order to cope with the case of subscribers that never meet again because of their movements or failures. This feature is clearly provided to limit the number of unnecessary messages sent around the network.

4.2 Message Delivery using Epidemic Routing
In this section, we examine one possible mechanism that will allow the delivery of messages in a partially connected network. The mechanism we discuss is intended for the purposes of demonstrating feasibility; more efficient communication mechanisms for this environment are themselves complex, and are the subject of another paper [13].

The asynchronous message delivery described above is based on a typical pure epidemic-style routing protocol [16]. A message that needs to be sent is replicated on each host in reach. In this way, copies of the messages are quickly spread through connected networks, like an infection. If a host becomes connected to another cloud of mobile nodes during its movement, the message spreads through this collection of hosts. Epidemic-style replication of data and messages has been exploited in the past in many fields, starting with the distributed database systems area [2].
Within epidemic routing, each host maintains a buffer containing the messages that it has created and the replicas of the messages generated by the other hosts. To improve performance, a hash-table indexes the content of the buffer. When two hosts connect, the host with the smaller identifier initiates a so-called anti-entropy session, sending a list containing the unique identifiers of the messages that it currently stores. The other host evaluates this list and sends back a list containing the identifiers it is storing that are not present in the other host, together with the messages that the other does not have. The host that started the session receives the list and, in the same way, sends the messages that are not present in the other host. Should buffer overflow occur, messages are dropped.

The reliability offered by this protocol is typically best-effort, since there is no guarantee that a message will eventually be delivered to its recipient. Clearly, the delivery ratio of the protocol increases proportionally to the maximum allowed delay time and the buffer size in each host (interesting simulation results may be found in [16]); one round of the anti-entropy exchange is sketched below.
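The following minimal sketch renders one such round under the description above; the Host class, its message map, and the in-memory transfers are illustrative assumptions standing in for EMMA's actual network exchange.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of one anti-entropy session between two connected hosts.
class Host {
    final int id;
    final Map<String, byte[]> buffer = new HashMap<>(); // msg id -> payload

    Host(int id) { this.id = id; }

    // The host with the smaller identifier initiates the session.
    static void antiEntropy(Host a, Host b) {
        Host initiator = (a.id < b.id) ? a : b;
        Host peer = (initiator == a) ? b : a;

        // 1. Initiator sends the identifiers of the messages it stores.
        Set<String> initiatorIds = new HashSet<>(initiator.buffer.keySet());

        // 2. Peer replies with the messages the initiator is missing.
        for (Map.Entry<String, byte[]> e : peer.buffer.entrySet()) {
            if (!initiatorIds.contains(e.getKey())) {
                initiator.buffer.put(e.getKey(), e.getValue());
            }
        }

        // 3. Initiator sends the messages the peer is missing.
        for (String msgId : initiatorIds) {
            peer.buffer.putIfAbsent(msgId, initiator.buffer.get(msgId));
        }
        // A bounded buffer would drop messages on overflow, preferring
        // to keep persistent, high-priority ones (see Section 4.3).
    }
}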
4.3 Adaptation of the JMS Message Model
In this section, we analyse the aspects of our adaptation of the specification related to the so-called JMS Message Model [5]. According to this, JMS messages are characterised by some properties defined using the header field, which contains values that are used by both clients and providers for their delivery. The aspects discussed in the remainder of this section are valid for both models (point to point and publish-subscribe).

A JMS message can be persistent or non-persistent. According to the JMS specification, persistent messages must be delivered with a higher degree of reliability than non-persistent ones. However, it is worth noting that it is not possible to ensure once-and-only-once reliability for persistent messages as defined in the specification, since, as we discussed in the previous subsection, the underlying epidemic protocol can guarantee only best-effort delivery. However, clients maintain a list of the identifiers of recently received messages to avoid the delivery of message duplicates. In other words, we provide the applications with at-most-once reliability for both types of messages.

In order to implement different levels of reliability, EMMA treats persistent and non-persistent messages differently during the execution of the anti-entropy epidemic protocol. Since the message buffer space is limited, persistent messages are preferentially replicated using the available free space. If this is insufficient and non-persistent messages are present in the buffer, these are replaced. Only the successful deliveries of persistent messages are notified to the senders.

According to the JMS specification, it is possible to assign a priority to each message; messages with higher priorities are delivered in a preferential way. As discussed above, persistent messages are prioritised above non-persistent ones. Further selection is based on their priorities: if there is not enough space to replicate all the persistent messages, a mechanism based on priorities is used to delete and replicate non-persistent messages (and, if necessary, persistent messages).

Messages are deleted from the buffers using the expiration time value that can be set by senders. This is a way to free space in the buffers (one preferentially deletes older messages in cases of conflict); to eliminate stale replicas in the system; and to limit the time for which destinations must hold message identifiers to dispose of duplicates.

4.4 Reliability and Acknowledgment Mechanisms
As already discussed, at-most-once message delivery is the best that can be achieved in terms of delivery semantics in partially connected ad-hoc settings. However, it is possible to improve the reliability of the system with efficient acknowledgment mechanisms. EMMA provides a mechanism for failure notification to applications if the acknowledgment is not received within a given timeout (which can be configured by application developers). This mechanism is the one that distinguishes the delivery of persistent and non-persistent messages in our JMS implementation: the deliveries of the former are notified to the senders, whereas the latter are not.

We use acknowledgment messages not only to inform senders about the successful delivery of messages, but also to delete the replicas of the delivered messages that are still present in the network. Each host maintains a list of the messages successfully delivered, which is updated as part of the normal process of information exchange between the hosts. The lists are exchanged during the first steps of the anti-entropy epidemic protocol with a certain predefined frequency. In the case of messages with multiple recipients, a list of the actual recipients is also stored. When a host receives the list, it checks its message buffer and updates it according to the following rules (sketched below): (1) if a message has a single recipient and it has been delivered, it is deleted from the buffer; (2) if a message has multiple recipients, the identifiers of the hosts to which it has been delivered are deleted from the associated list of recipients, and if the resulting list of recipients is empty, the message is deleted from the buffer.

These lists clearly have finite dimensions and are implemented as circular queues. This simple mechanism, together with the use of expiration timestamps, guarantees that old acknowledgment notifications are deleted from the system after a limited period of time.
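The buffer-pruning rules above can be summarised in a few lines; the sketch below assumes a per-message recipient list alongside the message buffer, with names that are our own rather than EMMA's.

import java.util.Map;
import java.util.Set;

// Sketch of acknowledgment-driven pruning (rules 1 and 2 above):
// 'recipients' maps each buffered message id to its outstanding
// recipients; 'delivered' maps message ids to acknowledged recipients.
class AckPruner {
    static void prune(Map<String, byte[]> buffer,
                      Map<String, Set<String>> recipients,
                      Map<String, Set<String>> delivered) {
        for (Map.Entry<String, Set<String>> ack : delivered.entrySet()) {
            String msgId = ack.getKey();
            Set<String> outstanding = recipients.get(msgId);
            if (outstanding == null) continue;     // not buffered here
            outstanding.removeAll(ack.getValue()); // rule 2
            if (outstanding.isEmpty()) {           // covers rule 1 as well
                buffer.remove(msgId);              // drop the replica
                recipients.remove(msgId);
            }
        }
    }
}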
In order to improve the reliability of EMMA, a mechanism for intelligent replication of queues and topics, based on context information, could be designed; however, this is not yet part of the current EMMA architecture.

5. IMPLEMENTATION AND PRELIMINARY EVALUATION
We implemented a prototype of our platform using the J2ME Personal Profile. The size of the executable is about 250KB including the JMS 1.1 jar file; this is a perfectly acceptable figure given the available memory of current mobile devices on the market. We tested our prototype on HP iPAQ PDAs running Linux, interconnected with WaveLAN, and on a number of laptops with the same network interface.

We also evaluated the middleware platform using the OMNeT++ discrete event simulator [17] in order to explore a range of mobile scenarios that incorporated a more realistic number of hosts than was achievable experimentally. More specifically, we assessed the performance of the system in terms of delivery ratio and average delay, varying the density of population and the buffer size, and using persistent and non-persistent messages with different priorities.

The simulation results show that EMMA's performance, in terms of delivery ratio and delay of persistent messages with higher priorities, is good. In general, it is evident that the delivery ratio is strongly related to the correct dimensioning of the buffers to the maximum acceptable delay. Moreover, the epidemic algorithms are able to guarantee a high delivery ratio if one evaluates performance over a time interval sufficient for the dissemination of the replicas of messages (i.e., the infection spreading) in a large portion of the ad-hoc network.

One consequence of the dimensioning problem is that scalability may be seriously impacted in peer-to-peer middleware for mobile computing, due to the resource poverty of the devices (limited memory to store messages temporarily) and the number of possible interconnections in ad-hoc settings. What is worse is that common forms of commercial and social organisation (six degrees of separation) mean that even modest TTL values on messages will lead to widespread flooding of epidemic messages. This problem arises because of the lack of intelligence in the epidemic protocol, and can be addressed by selecting carrier nodes for messages with greater care. The details of this process are, however, outside the scope of this paper (but may be found in [13]) and do not affect the foundation on which the EMMA middleware is based: the ability to deliver messages asynchronously.

6. CRITICAL VIEW OF THE STATE OF THE ART
The design of middleware platforms for mobile computing requires researchers to answer new and fundamentally different questions; simply assuming the presence of wired portions of the network, on which centralised functionality can reside, is not generalisable. Thus, it is necessary to investigate novel design principles and to devise architectural patterns that differ from those traditionally exploited in the design of middleware for fixed systems.

As an example, consider the recent cross-layering trend in ad-hoc networking [1]. This is a way of re-thinking software systems design, explicitly abandoning the classical forms of layering, since, although this separation of concerns affords portability, it does so at the expense of potential efficiency gains. We believe that it is possible to view our approach as an instance of cross-layering. In fact, we have added the epidemic network protocol at the middleware level and, at the same time, we have used the existing synchronous network protocol, if present, both in delivering messages (traditional layering) and in informing the middleware about when messages may be delivered, by revealing details of the forwarding tables (layer violation). For this reason, we prefer to consider them jointly as the communication layer of our platform, together providing more efficient message delivery.

Another interesting aspect is the exploitation of context and system information to improve the performance of mobile middleware platforms. Again, as a result of adopting a cross-layering methodology, we are able to build systems that gather information from the underlying operating system and communication components in order to allow for adaptation of behaviour.
We can summarise this conceptual design approach by saying that middleware platforms must be not only context-aware (i.e., they should be able to extract and analyse information from the surrounding context) but also system-aware (i.e., they should be able to gather information from the software and hardware components of the mobile system).

A number of middleware systems have been developed to support ad-hoc networking with the use of asynchronous communication (such as LIME, XMIDDLE, and STEAM [11]). In particular, the STEAM platform is an interesting example of event-based middleware for ad-hoc networks, providing location-aware message delivery and an effective solution for event filtering.

A discussion of JMS, and its mobile realisation, has already been conducted in Sections 2 and 4. The Swiss company Softwired has developed the first JMS middleware for mobile computing, called iBus Mobile [10]. The main components of this typically infrastructure-based architecture are the JMS provider, the so-called mobile JMS gateway, which is deployed on a fixed host, and a lightweight JMS client library. The gateway is used for the communication between the application server and mobile hosts, and is seen by the JMS provider as a normal JMS client. The JMS provider can be any JMS-enabled application server, such as BEA WebLogic.

Pronto [19] is an example of a middleware system based on messaging that is specifically designed for mobile environments. The platform is composed of three classes of components: mobile clients implementing the JMS specification; gateways that control traffic, guaranteeing efficiency and possible user customizations using different plug-ins; and JMS servers. Different configurations of these components are possible; with respect to mobile ad-hoc network applications, the most interesting is Serverless JMS. The aim of this configuration is to adapt JMS to a decentralized model. The publish-subscribe model exploits the efficiency and the scalability of the underlying IP multicast protocol. Unreliable and reliable message delivery services are provided: reliability is provided through a negative acknowledgment-based protocol. Pronto represents a good solution for infrastructure-based mobile networks, but it does not adequately target ad-hoc settings, since mobile nodes rely on fixed servers for the exchange of messages.

Other MOM implementations for mobile environments exist; however, they are usually straightforward extensions of existing middleware [8]. The only implementation of MOM specifically designed for mobile ad-hoc networks was developed at the University of Newcastle [18]. This work is again a JMS adaptation; the focus of that implementation is on group communication and the use of application-level routing algorithms for topic delivery of messages. However, there are a number of differences in the focus of our work. The importance that we attribute to disconnections makes persistence a vital requirement for any middleware that needs to be used in mobile ad-hoc networks. The authors of [18] signal persistence as possible future work, not considering the fact that routing a message to a non-connected host will result in delivery failure. This is a remarkable limitation in mobile settings, where unpredictable disconnections are the norm rather than the exception.
7. ROADMAP AND CONCLUSIONS
Asynchronous communication is a useful communication paradigm for mobile ad-hoc networks, as hosts are allowed to come and go, and to pick up messages when convenient, also taking account of their resource availability (e.g., power, connectivity levels). In this paper we have described the state of the art in terms of MOM for mobile systems. We have also shown a proof-of-concept adaptation of JMS to the extreme scenario of partially connected mobile ad-hoc networks.

We have described and discussed the characteristics and differences of our solution with respect to traditional JMS implementations and the existing adaptations for mobile settings. However, trade-offs between application-level routing and resource usage should also be investigated, as mobile devices are commonly power- and resource-scarce. A key limitation of this work is the poorly performing epidemic algorithm, and an important advance in the practicability of this work requires an algorithm that better balances the needs of efficiency and message delivery probability. We are currently working on algorithms and protocols that, exploiting probabilistic and statistical techniques on the basis of small amounts of exchanged information, are able to improve considerably the efficiency, in terms of resources (memory, bandwidth, etc.), and the reliability of our middleware platform [13].

One futuristic research development, which may take these ideas of adaptation of messaging middleware for mobile environments further, is the introduction of more mobility-oriented communication extensions, for instance the support of geocast (i.e., the ability to send messages to specific geographical areas).

8. REFERENCES
[1] M. Conti, G. Maselli, G. Turi, and S. Giordano. Cross-layering in Mobile ad-hoc Network Design. IEEE Computer, 37(2):48-51, February 2004.
[2] A. Demers, D. Greene, C. Hauser, W. Irish, J. Larson, S. Shenker, H. Sturgis, D. Swinehart, and D. Terry. Epidemic Algorithms for Replicated Database Maintenance. In Sixth Symposium on Principles of Distributed Computing, pages 1-12, August 1987.
[3] A. Doria, M. Uden, and D. P. Pandey. Providing connectivity to the Saami nomadic community. In Proceedings of the Second International Conference on Open Collaborative Design for Sustainable Innovation, December 2002.
[4] M. Haahr, R. Cunningham, and V. Cahill. Supporting CORBA applications in a Mobile Environment. In 5th International Conference on Mobile Computing and Networking (MOBICOM99), pages 36-47. ACM, August 1999.
[5] M. Hapner, R. Burridge, R. Sharma, J. Fialli, and K. Stout. Java Message Service Specification Version 1.1. Sun Microsystems, Inc., April 2002. http://java.sun.com/products/jms/.
[6] J. Hart. WebSphere MQ: Connecting your applications without complex programming. IBM WebSphere Software White Papers, 2003.
[7] S. Hayward and M. Pezzini. Marrying Middleware and Mobile Computing. Gartner Group Research Report, September 2001.
[8] IBM. WebSphere MQ Everyplace Version 2.0, November 2002. http://www-3.ibm.com/software/integration/wmqe/.
[9] ITU. Connecting remote communities. Documents of the World Summit on Information Society, 2003. http://www.itu.int/osg/spu/wsis-themes.
[10] S. Maffeis. Introducing Wireless JMS. Softwired AG, www.softwired-inc.com, 2002.
[11] C. Mascolo, L. Capra, and W. Emmerich. Middleware for Mobile Computing. In E. Gregori, G. Anastasi, and S. Basagni, editors, Advanced Lectures on Networking, volume 2497 of Lecture Notes in Computer Science, pages 20-58. Springer Verlag, 2002.
[12] Microsoft. Microsoft Message Queuing (MSMQ) Version 2.0 Documentation.
[13] M. Musolesi, S. Hailes, and C. Mascolo. Adaptive routing for intermittently connected mobile ad-hoc networks. Technical report, UCL-CS Research Note, July 2004. Submitted for publication.
[14] Sun Microsystems. Java Naming and Directory Interface (JNDI) Documentation Version 1.2. 2003. http://java.sun.com/products/jndi/.
[15] Sun Microsystems. Jini Specification Version 2.0, 2003. http://java.sun.com/products/jini/.
[16] A. Vahdat and D. Becker. Epidemic Routing for Partially Connected ad-hoc Networks. Technical Report CS-2000-06, Department of Computer Science, Duke University, 2000.
[17] A. Vargas. The OMNeT++ discrete event simulation system. In Proceedings of the European Simulation Multiconference (ESM 2001), Prague, June 2001.
[18] E. Vollset, D. Ingham, and P. Ezhilchelvan. JMS on Mobile ad-hoc Networks. In Personal Wireless Communications (PWC), pages 40-52, Venice, September 2003.
[19] E. Yoneki and J. Bacon. Pronto: Mobile gateway with publish-subscribe paradigm over wireless network. Technical Report 559, University of Cambridge, Computer Laboratory, February 2003.", "keywords": "mobile ad-hoc network;asynchronous messaging middleware;context awareness;epidemic protocol;message-oriented middleware;message orient middleware;asynchronous communication;middleware for mobile computing;mobile ad-hoc environment;cross-layering;group communication;application level routing;java messaging service;epidemic messaging middleware"} {"name": "train_C-75", "title": "Composition of a DIDS by Integrating Heterogeneous IDSs on Grids", "abstract": "This paper considers the composition of a DIDS (Distributed Intrusion Detection System) by integrating heterogeneous IDSs (Intrusion Detection Systems). A Grid middleware is used for this integration. In addition, an architecture for this integration is proposed and validated through simulation.", "fulltext": "1. INTRODUCTION
Solutions for integrating heterogeneous IDSs (Intrusion Detection Systems) have been proposed by several groups [6],[7],[11],[2]. Some reasons for integrating IDSs are described by the IDWG (Intrusion Detection Working Group) from the IETF (Internet Engineering Task Force) [12] as follows:

• Many IDSs available in the market have strong and weak points, which generally make necessary the deployment of more than one IDS to provide an adequate solution.
• Attacks and intrusions generally originate from multiple networks spanning several administrative domains; these domains usually utilize different IDSs. The integration of IDSs is then needed to correlate information from multiple networks to allow the identification of distributed attacks and/or intrusions.
• The interoperability/integration of different IDS components would benefit research on intrusion detection and speed up the deployment of IDSs as commercial products.

DIDSs (Distributed Intrusion Detection Systems) therefore started to emerge in the early 90s [9] to allow the correlation of intrusion information from multiple hosts, networks or domains to detect distributed attacks.
Research on DIDSs has then received much\ninterest, mainly because centralised IDSs are not able to provide\nthe information needed to prevent such attacks [13].\nHowever, the realization of a DIDS requires a high degree of\ncoordination. Computational Grids are appealing as they enable\nthe development of distributed application and coordination in a\ndistributed environment. Grid computing aims to enable\ncoordinate resource sharing in dynamic groups of individuals\nand/or organizations. Moreover, Grid middleware provides means\nfor secure access, management and allocation of remote resources;\nresource information services; and protocols and mechanisms for\ntransfer of data [4].\nAccording to Foster et al. [4], Grids can be viewed as a set of\naggregate services defined by the resources that they share. OGSA\n(Open Grid Service Architecture) provides the foundation for this\nservice orientation in computational Grids. The services in OGSA\nare specified through well-defined, open, extensible and\nplatformindependent interfaces, which enable the development of\ninteroperable applications.\nThis article proposes a model for integration of IDSs by using\ncomputational Grids. The proposed model enables heterogeneous\nIDSs to work in a cooperative way; this integration is termed\nDIDSoG (Distributed Intrusion Detection System on Grid). Each\nof the integrated IDSs is viewed by others as a resource accessed\nthrough the services that it exposes. A Grid middleware provides\nseveral features for the realization of a DIDSoG, including [3]:\ndecentralized coordination of resources; use of standard protocols\nand interfaces; and the delivery of optimized QoS (Quality of\nService).\nThe service oriented architecture followed by Grids (OGSA)\nallows the definition of interfaces that are adaptable to different\nplatforms. Different implementations can be encapsulated by a\nservice interface; this virtualisation allows the consistent access to\nresources in heterogeneous environments [3]. The virtualisation of\nthe environment through service interfaces allows the use of\nservices without the knowledge of how they are actually\nimplemented. This characteristic is important for the integration\nof IDSs as the same service interfaces can be exposed by different\nIDSs.\nGrid middleware can thus be used to implement a great variety of\nservices. Some functions provided by Grid middleware are [3]: (i)\ndata management services, including access services, replication,\nand localisation; (ii) workflow services that implement coordinate\nexecution of multiple applications on multiple resources; (iii)\nauditing services that perform the detection of frauds or\nintrusions; (iv) monitoring services which implement the\ndiscovery of sensors in a distributed environment and generate\nalerts under determined conditions; (v) services for identification\nof problems in a distributed environment, which implement the\ncorrelation of information from disparate and distributed logs.\nThese services are important for the implementation of a DIDSoG.\nA DIDS needs services for the location of and access to\ndistributed data from different IDSs. Auditing and monitoring\nservices take care of the proper needs of the DIDSs such as:\nsecure storage, data analysis to detect intrusions, discovery of\ndistributed sensors, and sending of alerts. 
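The service-interface virtualisation described above can be made concrete with a small sketch. The following Java fragment is only an illustration under our own naming assumptions; neither OGSA nor the works cited here prescribe this particular interface:

    // Hypothetical common interface behind which heterogeneous IDSs are hidden.
    // Consumers see only the service contract, not the tool that implements it.
    public interface IntrusionDetectionService {
        String featureType();          // e.g., "sensor", "analyser", "correlation"
        void receive(byte[] data);     // data arriving from origin resources
        byte[] results();              // processed data for destination resources
    }

    // A concrete IDS is integrated by wrapping it; callers remain unchanged.
    public class NativeIdsWrapper implements IntrusionDetectionService {
        @Override public String featureType() { return "analyser"; }
        @Override public void receive(byte[] data) { /* hand the data to the native tool */ }
        @Override public byte[] results() { return new byte[0]; /* native tool output */ }
    }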
The correlation of\ndistributed logs is also relevant because the detection of\ndistributed attacks depends on the correlation of the alert\ninformation generated by the different IDSs that compose the\nDIDSoG.\nThe next sections of this article are organized as follows. Section\n2 presents related work. The proposed model is presented in\nSection 3. Section 4 describes the development and a case study.\nResults and discussion are presented in Section 5. Conclusions\nand future work are discussed in Section 6.\n2. RELATED WORK\nDIDMA [5] is a flexible, scalable, reliable, and\nplatformindependent DIDS. DIDMA architecture allows distributed\nanalysis of events and can be easily extended by developing new\nagents. However, the integration with existing IDSs and the\ndevelopment of security components are presented as future work\n[5]. The extensibility of DIDS DIDMA and the integration with\nother IDSs are goals pursued by DIDSoG. The flexibility,\nscalability, platform independence, reliability and security\ncomponents discussed in [5] are achieved in DIDSoG by using a\nGrid platform.\nMore efficient techniques for analysis of great amounts of data in\nwide scale networks based on clustering and applicable to DIDSs\nare presented in [13]. The integration of heterogeneous IDSs to\nincrease the variety of intrusion detection techniques in the\nenvironment is mentioned as future work [13] DIDSoG thus aims\nat integrating heterogeneous IDSs [13].\nRef. [10] presents a hierarchical architecture for a DIDS;\ninformation is collected, aggregated, correlated and analysed as it\nis sent up in the hierarchy. The architecture comprises of several\ncomponents for: monitoring, correlation, intrusion detection by\nstatistics, detection by signatures and answers. Components in the\nsame level of the hierarchy cooperate with one another. The\nintegration proposed by DIDSoG also follows a hierarchical\narchitecture. Each IDS integrated to the DIDSoG offers\nfunctionalities at a given level of the hierarchy and requests\nfunctionalities from IDSs from another level. The hierarchy\npresented in [10] integrates homogeneous IDSs whereas the\nhierarchical architecture of DIDSoG integrates heterogeneous\nIDSs.\nThere are proposals on integrating computational Grids and IDSs\n[6],[7],[11],[2]. Ref. [6] and [7] propose the use of Globus\nToolkit for intrusion detection, especially for DoS (Denial of\nService) and DDoS (Distributed Denial of Service) attacks;\nGlobus is used due to the need to process great amounts of data to\ndetect these kinds of attack. A two-phase processing architecture\nis presented. The first phase aims at the detection of momentary\nattacks, while the second phase is concerned with chronic or\nperennial attacks.\nTraditional IDSs or DIDSs are generally coordinated by a central\npoint; a characteristic that leaves them prone to attacks. Leu et al.\n[6] point out that IDSs developed upon Grids platforms are less\nvulnerable to attacks because of the distribution provided for such\nplatforms. Leu et al. [6],[7] have used tools to generate several\ntypes of attacks - including TCP, ICMP and UDP flooding - and\nhave demonstrated through experimental results the advantages of\napplying computational Grids to IDSs.\nThis work proposes the development of a DIDS upon a Grid\nplatform. However, the resulting DIDS integrates heterogeneous\nIDSs whereas the DIDSs upon Grids presented by Leu et al.\n[6][7] do not consider the integration of heterogeneous IDSs. 
The processing in phases [6][7] is also contemplated by DIDSoG, which is enabled by the specification of several levels of processing allowed by the integration of heterogeneous IDSs.
The DIDS GIDA (Grid Intrusion Detection Architecture) targets the detection of intrusions in a Grid environment [11]. The GridSim Grid simulator was used for the validation of DIDS GIDA. Homogeneous resources were used to simplify the development [11]. However, the possibility of applying heterogeneous detection systems is left for future work.
Another DIDS for Grids is presented by Choon and Samsudim [2]. Scenarios demonstrating how a DIDS can execute on a Grid environment are presented.
DIDSoG does not aim at detecting intrusions in a Grid environment. In contrast, DIDSoG uses the Grid to compose a DIDS by integrating specific IDSs; the resulting DIDS could, however, be used to identify attacks in a Grid environment. Local and distributed attacks can be detected through the integration of traditional IDSs, while attacks particular to Grids can be detected through the integration of Grid IDSs.
3. THE PROPOSED MODEL
DIDSoG presents a hierarchy of intrusion detection services; this hierarchy is organized through a two-dimensional vector defined by Scope:Complexity. The IDSs composing DIDSoG can be organized in different levels of scope or complexity, depending on their functionalities, the topology of the target environment and the expected results.
Figure 1 presents a DIDSoG composed of different intrusion detection services (i.e., data gathering, data aggregation, data correlation, analysis, intrusion response and management) provided by different IDSs. The information flow and the relationship between the levels of scope and complexity are presented in this figure.
Information about the environment (host, network or application) is collected by Sensors located both in user 1's and user 2's computers in domain 1. The information is sent both to simple Analysers that act on the information from a single host (level 1:1), and to aggregation and correlation services that act on information from multiple hosts from the same domain (level 2:1). Simple Analysers in the first scope level send the information to more complex Analysers in the next levels of complexity (level 1:N). When an Analyser detects an intrusion, it communicates with the Countermeasure and Monitoring services registered to its scope. An Analyser can invoke a Countermeasure service that replies to a detected attack, or inform a Monitoring service about the ongoing attack, so the administrator can act accordingly.
Aggregation and correlation resources in the second scope receive information from Sensors from different users' computers (user 1's and user 2's) in domain 1. These resources process the received information and send it to the analysis resources registered to the first level of complexity in the second scope (level 2:1). The information is also sent to the aggregation and correlation resources registered in the first level of complexity in the next scope (level 3:1).
Fig. 1. How DIDSoG works. (Figure: Sensor, Analyser, Aggregation/Correlation, Monitor and Response services arranged across scope levels 1 to 3, spanning users 1 and 2 in domain 1 and a second domain.)
The analysis resources in the second scope act like the analysis resources in the first scope, directing the information to a more complex analysis resource and putting the Countermeasure and Monitoring resources in action in case of detected attacks.
Aggregation and correlation resources in the third scope receive information from domains 1 and 2. These resources then carry out the aggregation and correlation of the information from the different domains and send it to the analysis resources in the first level of complexity in the third scope (level 3:1). The information could also be sent to the aggregation and correlation resources in the next scope in case any resources are registered at such a level.
The analysis resources in the third scope act similarly to the analysis resources in the first and second scopes, except that the analysis resources in the third scope act on information from multiple domains.
The functionalities of the registered resources in each of the scopes and complexity levels can vary from one environment to another. The model allows the development of N levels of scope and complexity.
Figure 2 presents the architecture of a resource participating in the DIDSoG. Initially, the resource registers itself with the GIS (Grid Information Service) so other participating resources can query the services provided. After registering itself, the resource requests information about other intrusion detection resources registered with the GIS.
A given resource of DIDSoG interacts with other resources by receiving data from its Origin Resources, processing it, and sending the results to its Destination Resources, therefore forming a grid of intrusion detection resources.
Fig. 2. Architecture of a resource participating in the DIDSoG. (Figure: a Grid resource composed of the Base, Connector, Descriptor and Native IDS components, connected to Grid Origin Resources, Grid Destination Resources and the Grid Information Service.)
A resource is made up of four components: Base, Connector, Descriptor and Native IDS. The Native IDS corresponds to the IDS being integrated into the DIDSoG. This component processes the data received from the Origin Resources and generates new data to be sent to the Destination Resources. A Native IDS component can be any tool that processes information related to intrusion detection, including analysis, data gathering, data aggregation, data correlation, intrusion response or management.
The Descriptor is responsible for the information that identifies a resource and its respective Destination Resources in the DIDSoG. Figure 3 presents the class diagram of the information stored by the Descriptor. The ResourceDescriptor class has Feature, Level, DataType and TargetResources type members. The Feature class represents the functionalities that a resource has. Its type, name and version attributes refer to the functions offered by the Native IDS component, its name and its version, respectively. The Level class identifies the levels of scope and complexity in which the resource acts. The DataType class represents the data format that the resource accepts to receive. The DataType class is specialized by the classes Text, XML and Binary. The class XML contains the DTDFile attribute to specify the DTD file that validates the received XML.
Fig. 3.
Class Diagram of the Descriptor component. (Figure: the ResourceDescriptor class aggregates Feature, Level, DataType (specialized by Text, XML and Binary) and TargetResources, which in turn aggregates Resource.)
The TargetResources class represents the features of the Destination Resources of a given resource. This class aggregates Resource. The Resource class identifies the characteristics of a Destination Resource. This identification is made through the featureType attribute and the Level and DataType classes.
A given resource analyses the information from the Descriptors of other resources, and compares this information with the information specified in its TargetResources to know to which resources to send the results of its processing.
The Base component is responsible for the communication of a resource with the other resources of the DIDSoG and with the Grid Information Service. It is this component that registers the resource and queries other resources in the GIS.
The Connector component is the link between the Base and the Native IDS. The information that the Base receives from the Origin Resources is passed to the Connector component. The Connector component performs the necessary changes in the data so that it is understood by the Native IDS, and sends this data to the Native IDS for processing. The Connector component also has the responsibility of collecting the information processed by the Native IDS, and making the necessary changes so the information can pass through the DIDSoG again. After these changes, the Connector sends the information to the Base, which in turn sends it to the Destination Resources in accordance with the specifications of the Descriptor component.
4. IMPLEMENTATION
We have used GridSim Toolkit 3 [1] for the development and evaluation of the proposed model. We have used and extended GridSim features to model and simulate the resources and components of DIDSoG.
Figure 4 presents the class diagram of the simulated DIDSoG. The Simulation_DIDSoG class starts the simulation components. The Simulation_User class represents a user of DIDSoG. This class's function is to initiate the processing of a Sensor resource, from where the gathered information will be sent to other resources. DIDSoG_GIS keeps a registry of the DIDSoG resources. The DIDSoG_BaseResource class implements the Base component (see Figure 2). DIDSoG_BaseResource interacts with the DIDSoG_Descriptor class, which represents the Descriptor component. The DIDSoG_Descriptor class is created from an XML file that specifies a resource descriptor (see Figure 3).
Fig. 4. Class Diagram of the simulated DIDSoG. (Figure: Simulation_DIDSoG, Simulation_User, DIDSoG_GIS, DIDSoG_BaseResource and DIDSoG_Descriptor, built on the GridSim classes GridResource and GridInformationService.)
A Connector component must be developed for each Native IDS integrated into DIDSoG. The Connector component is implemented by creating a class derived from DIDSoG_BaseResource. The new class implements new functionalities in accordance with the needs of the corresponding Native IDS.
In the simulation environment, resources for data collection, analysis, aggregation/correlation and response generation were integrated. Classes were developed to simulate the processing of each Native IDS component associated with the resources. For each simulated Native IDS a class derived from DIDSoG_BaseResource was developed. This class corresponds to the Connector component of the Native IDS and aims at integrating the IDS into DIDSoG.
An XML file describing each of the integrated resources is selected through the Connector component.
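To illustrate this step, a minimal sketch of such a Connector follows. DIDSoG_BaseResource is the class named above, but its constructor and the hook methods used here are our own assumptions:

    // Hypothetical Connector for a Native IDS that consumes TCPDump-format data.
    public class TcpDumpAnalyserConnector extends DIDSoG_BaseResource {
        // Minimal stand-in for the wrapped tool; a real Native IDS goes here.
        static class NativeAnalyser {
            String toNativeFormat(String record) { return record.trim(); }
            String process(String record) { return record.isEmpty() ? null : record; }
        }

        private final NativeAnalyser nativeIds = new NativeAnalyser();

        public TcpDumpAnalyserConnector(String name, String descriptorXmlFile) throws Exception {
            super(name, descriptorXmlFile); // the Base builds the DIDSoG_Descriptor from the XML file
        }

        // Assumed hook: called by the Base for each data item from an Origin Resource.
        protected void onDataReceived(String tcpDumpRecord) {
            String adapted = nativeIds.toNativeFormat(tcpDumpRecord); // adapt data for the Native IDS
            String result = nativeIds.process(adapted);               // processing by the Native IDS
            if (result != null) {
                forwardToDestinations(result); // assumed Base method; targets come from the Descriptor
            }
        }
    }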
The resulting relationship between the resources integrated into the DIDSoG, in accordance with the specification of their respective descriptors, is presented in Figure 5.
The Sensor_1 and Sensor_2 resources generate simulated data in the TCPDump [8] format. The generated data is directed to the Analyser_1 and Aggreg_Corr_1 resources, in the case of Sensor_1, and to Aggreg_Corr_1 in the case of Sensor_2, according to the specification of their descriptors.
Fig. 5. Flow of the execution of the simulation. (Figure: Sensor_1 and Sensor_2 send TCPDump data to Analyser_1 (level 1:1) and Aggreg_Corr_1 (level 2:1); aggregated TCPDumpAg data flows to Analyser_2 (level 2:1) and Analyser_3 (level 2:2); IDMEF alerts go to Countermeasure_1 (level 1) and Countermeasure_2 (level 2).)
The Native IDS of Analyser_1 generates alerts for any attempt of connection to port 23. The data received by Analyser_1 presented such features, generating an IDMEF (Intrusion Detection Message Exchange Format) alert [14]. The generated alert was sent to the Countermeasure_1 resource, where a warning was dispatched to the administrator informing him of the alert received.
The Aggreg_Corr_1 resource received the information generated by sensors 1 and 2. Its processing activities consist of correlating the source IP addresses with the received data. The information resulting from the processing of Aggreg_Corr_1 was directed to the Analyser_2 resource.
The Native IDS component of Analyser_2 generates alerts when a source tries to connect to the same port number on multiple destinations. This situation is identified by Analyser_2 in the data received from Aggreg_Corr_1, and an alert in IDMEF format is then sent to the Countermeasure_2 resource.
In addition to generating alerts in IDMEF format, Analyser_2 also directs the received data to Analyser_3, at complexity level 2. The Native IDS component of Analyser_3 generates alerts when the transmission of ICMP messages from a given source to multiple destinations is detected. This situation is detected in the data received from Analyser_2, and an IDMEF alert is then sent to the Countermeasure_2 resource.
The Countermeasure_2 resource receives the alerts generated by analysers 2 and 3, in accordance with the implementation of its Native IDS component. Warnings on received alerts are dispatched to the administrator.
The simulation carried out demonstrates how DIDSoG works. Simulated data was generated to be the input for a grid of intrusion detection systems composed of several distinct resources. The resources carry out tasks such as data collection, aggregation and analysis, and the generation of alerts and warnings in an integrated manner.
5. EXPERIMENT RESULTS
The hierarchic organization of scope and complexity provides a high degree of flexibility to the model. The DIDSoG can be modelled in accordance with the needs of each environment. The descriptors define the data flow desired for the resulting DIDS.
Each Native IDS is integrated into the DIDSoG through a Connector component. The Connector component is also flexible in the DIDSoG. Adaptations, conversions of data types and auxiliary processes that Native IDSs need are provided by the Connector.
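As an example of what a wrapped Native IDS contributes behind such a Connector, the detection step of Analyser_1 above can be sketched as follows. Only the rule itself, an alert for any connection attempt to port 23, comes from the scenario; the record layout and all names are assumptions:

    // Sketch of Analyser_1's rule: emit an IDMEF-like alert for any attempt to
    // connect to port 23. Input records are assumed to be "srcIP dstIP dstPort".
    public class TelnetConnectionAnalyser {
        private static final int TELNET_PORT = 23;

        public String analyse(String record) {
            String[] fields = record.trim().split("\\s+");
            if (Integer.parseInt(fields[2]) == TELNET_PORT) {
                // Minimal placeholder; a real alert would follow the IDMEF DTD [14].
                return "<Alert><Source>" + fields[0] + "</Source>"
                        + "<Target>" + fields[1] + ":" + fields[2] + "</Target></Alert>";
            }
            return null; // no alert for this record
        }
    }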
Filters and generation of Specific logs for each Native\nIDS or environment can also be incorporated to the Connector.\nIf the integration of a new IDS to an environment already\nconfigured is desired, it is enough to develop the Connector for\nthe desired IDS and to specify the resource Descriptor. After the\nspecification of the Connector and the Descriptor the new IDS is\nintegrated to the DIDSoG.\nThrough the definition of scopes, resources can act on data of\ndifferent source groups. For example, scope 1 can be related to a\ngiven set of hosts, scope 2 to another set of hosts, while scope 3\ncan be related to hosts from scopes 1 and 2. Scopes can be defined\naccording to the needs of each environment.\nThe complexity levels allow the distribution of the processing\nbetween several resources inside the same scope. In an analysis\ntask, for example, the search for simple attacks can be made by\nresources of complexity 1, whereas the search for more complex\nattacks, that demands more time, can be performed by resources\nof complexity 2. With this, the analysis of the data is made by two\nresources.\nThe distinction between complexity levels can also be organized\nin order to integrate different techniques of intrusion detection.\nThe complexity level 1 could be defined for analyses based on\nsignatures, which are simpler techniques; the complexity level 2\nfor techniques based on behaviour, that require greater\ncomputational power; and the complexity level 3 for intrusion\ndetection in applications, where the techniques are more specific\nand depend on more data.\nThe division of scopes and the complexity levels make the\nprocessing of the data to be carried out in phases. No resource has\nfull knowledge about the complete data processing flow. Each\nresource only knows the results of its processing and the\ndestination to which it sends the results. Resources of higher\ncomplexity must be linked to resources of lower complexity.\nTherefore, the hierarchic structure of the DIDSoG is maintained,\nfacilitating its extension and integration with other domains of\nintrusion detection.\nBy carrying out a hierarchic relationship between the several\nchosen analysers for an environment, the sensor resource is not\noverloaded with the task to send the data to all the analysers. An\ninitial analyser will exist (complexity level 1) to which the sensor\nwill send its data, and this analyser will then direct the data to the\nnext step of the processing flow. Another feature of the\nhierarchical organization is the easy extension and integration\nwith other domains. If it is necessary to add a new host (sensor) to\nthe DIDSoG, it is enough to plug it to the first hierarchy of\nresources. If it is necessary to add a new analyser, it will be in the\nscope of several domains, it is enough to relate it to another\nresource of same scope.\nThe DIDSoG allows different levels to be managed by different\nentities. For example, the first scope can be managed by the local\nuser of a host. The second scope, comprising several hosts of a\ndomain can be managed by the administrator of the domain. A\nthird entity can be responsible for managing the security of\nseveral domains in a joint way. This entity can act in the scope 3\nindependently from others.\nWith the proposed model for integration of IDSs in Grids, the\ndifferent IDSs of an environment (or multiple IDSs integrated) act\nin a cooperative manner improving the intrusion detection\nservices, mainly in two aspects. 
First, the information from\nmultiple sources are analysed in an integrated way to search for\ndistributed attacks. This integration can be made under several\nscopes. Second, there is a great diversity of data aggregation\ntechniques, data correlation and analysis, and intrusion response\nthat can be applied to the same environment; these techniques can\nbe organized under several levels of complexity.\n6. CONCLUSION\nThe integration of heterogeneous IDSs is important. However, the\nincompatibility and diversity of IDS solutions make such\nintegration extremely difficult. This work thus proposed a model\nfor composition of DIDS by integrating existing IDSs on a\ncomputational Grid platform (DIDSoG). IDSs in DIDSoG are\nencapsulated as Grid services for intrusion detection. A\ncomputational Grid platform is used for the integration by\nproviding the basic requirements for communication, localization,\nresource sharing and security mechanisms.\nThe components of the architecture of the DIDSoG were\ndeveloped and evaluated using the GridSim Grid simulator.\nServices for communication and localization were used to carry\nout the integration between components of different resources.\nBased on the components of the architecture, several resources\nwere modelled forming a grid of intrusion detection. The\nsimulation demonstrated the usefulness of the proposed model.\nData from the sensor resources was read and this data was used to\nfeed other resources of DIDSoG.\nThe integration of distinct IDSs could be observed through the\nsimulated environment. Resources providing different intrusion\ndetection services were integrated (e.g. analysis, correlation,\naggregation and alert). The communication and localization\nservices provided by GridSim were used to integrate components\nof different resources. Various resources were modelled following\nthe architecture components forming a grid of intrusion detection.\nThe components of DIDSoG architecture have served as base for\nthe integration of the resources presented in the simulation.\nDuring the simulation, the different IDSs cooperated with one\nanother in a distributed manner; however, in a coordinated way\nwith an integrated view of the events, having, thus, the capability\nto detect distributed attacks. This capability demonstrates that the\nIDSs integrated have resulted in a DIDS.\nRelated work presents cooperation between components of a\nspecific DIDS. Some work focus on either the development of\nDIDSs on computational Grids or the application of IDSs to\ncomputational Grids. However, none deals with the integration of\nheterogeneous IDSs. In contrast, the proposed model developed\nand simulated in this work, can shed some light into the question\nof integration of heterogeneous IDSs.\nDIDSoG presents new research opportunities that we would like\nto pursue, including: deployment of the model in a more realistic\nenvironment such as a Grid; incorporation of new security\nservices; parallel analysis of data by Native IDSs in multiple\nhosts.\nIn addition to the integration of IDSs enabled by a grid\nmiddleware, the cooperation of heterogeneous IDSs can be\nviewed as an economic problem. IDSs from different\norganizations or administrative domains need incentives for\njoining a grid of intrusion detection services and for collaborating\nwith other IDSs. The development of distributed strategy proof\nmechanisms for integration of IDSs is a challenge that we would\nlike to tackle.\n7. 
REFERENCES
[1] Sulistio, A.; Poduvaly, G.; Buyya, R.; and Tham, C. K. Constructing A Grid Simulation with Differentiated Network Service Using GridSim. Proc. of the 6th International Conference on Internet Computing (ICOMP'05), June 27-30, 2005, Las Vegas, USA.
[2] Choon, O. T.; Samsudim, A. Grid-based Intrusion Detection System. The 9th IEEE Asia-Pacific Conference on Communications, September 2003.
[3] Foster, I.; Kesselman, C.; Tuecke, S. The Physiology of the Grid: An Open Grid Service Architecture for Distributed System Integration. Draft June 2002. Available at http://www.globus.org/research/papers/ogsa.pdf. Access Feb. 2006.
[4] Foster, I.; Kesselman, C.; Tuecke, S. The Anatomy of the Grid: Enabling Scalable Virtual Organizations. International Journal of Supercomputer Applications, 2001.
[5] Kannadiga, P.; Zulkernine, M. DIDMA: A Distributed Intrusion Detection System Using Mobile Agents. Proceedings of the IEEE Sixth International Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing, May 2005.
[6] Leu, Fang-Yie, et al. Integrating Grid with Intrusion Detection. Proceedings of the 19th IEEE AINA'05, March 2005.
[7] Leu, Fang-Yie, et al. A Performance-Based Grid Intrusion Detection System. Proceedings of the 29th IEEE COMPSAC'05, July 2005.
[8] McCanne, S.; Leres, C.; Jacobson, V. TCPdump/Libpcap. http://www.tcpdump.org/, 1994.
[9] Snapp, S. R. et al. DIDS (Distributed Intrusion Detection System) - Motivation, Architecture and An Early Prototype. Proceedings of the Fifteenth IEEE National Computer Security Conference, Baltimore, MD, October 1992.
[10] Sterne, D.; et al. A General Cooperative Intrusion Detection Architecture for MANETs. Proceedings of the Third IEEE IWIA'05, March 2005.
[11] Tolba, M. F.; et al. GIDA: Toward Enabling Grid Intrusion Detection Systems. 5th IEEE International Symposium on Cluster Computing and the Grid, May 2005.
[12] Wood, M. Intrusion Detection Message Exchange Requirements. Draft-ietf-idwg-requirements-10, October 2002. Available at http://www.ietf.org/internet-drafts/draftietf-idwg-requirements-10.txt. Access March 2006.
[13] Zhang, Yu-Fang; Xiong, Z.; Wang, X. Distributed Intrusion Detection Based on Clustering. Proceedings of the IEEE International Conference on Machine Learning and Cybernetics, August 2005.
[14] Curry, D.; Debar, H. Intrusion Detection Message Exchange Format Data Model and Extensible Markup Language (XML) Document Type Definition. Draft-ietf-idwg-idmef-xml-10, March 2006. Available at http://www.ietf.org/internetdrafts/draft-ietf-idwg-idmef-xml-16.txt.", "keywords": "system integration;gridsim grid simulator;heterogeneous intrusion detection system;distributed intrusion detection system;grid middleware;intrusion detection system;grid service for intrusion detection;ids integration;grid;open grid service architecture;integration of ids;computational grid;grid intrusion detection architecture;intrusion detection service"} {"name": "train_C-76", "title": "Assured Service Quality by Improved Fault Management Service-Oriented Event Correlation", "abstract": "The paradigm shift from device-oriented to service-oriented management also has implications for the area of event correlation. Today's event correlation mainly addresses the correlation of events as reported from management tools. However, a correlation of user trouble reports concerning services should also be performed.
This is necessary to improve the resolution time and to reduce the effort for keeping the service agreements. We refer to such a type of correlation as service-oriented event correlation. The necessity to use this kind of event correlation is motivated in the paper. To introduce service-oriented event correlation for an IT service provider, an appropriate modeling of the correlation workflow and of the information is necessary. Therefore, we examine the process management frameworks IT Infrastructure Library (ITIL) and enhanced Telecom Operations Map (eTOM) for their contribution to the workflow modeling in this area. The different kinds of dependencies that we find in our general scenario are then used to develop a workflow for the service-oriented event correlation. The MNM Service Model, which is a generic model for IT service management proposed by the Munich Network Management (MNM) Team, is used to derive an appropriate information modeling. An example scenario, the Web Hosting Service of the Leibniz Supercomputing Center (LRZ), is used to demonstrate the application of service-oriented event correlation.", "fulltext": "1. INTRODUCTION
In huge networks a single fault can cause a burst of failure events. To handle the flood of events and to find the root cause of a fault, event correlation approaches like rule-based reasoning, case-based reasoning or the codebook approach have been developed. The main idea of correlation is to condense and structure events to retrieve meaningful information. Until now, these approaches address primarily the correlation of events as reported from management tools or devices. Therefore, we call them device-oriented.
In this paper we define a service as a set of functions which are offered by a provider to a customer at a customer-provider interface. The definition of a service is therefore more general than the definition of a Web Service, but a Web Service is included in this service definition. As a consequence, the results are applicable to Web Services as well as to other kinds of services. A service level agreement (SLA) is defined as a contract between customer and provider about guaranteed service performance.
As in today's IT environments the offering of such services with an agreed service quality becomes more and more important, this change also affects the event correlation. It has become a necessity for providers to offer such guarantees for a differentiation from other providers. To avoid SLA violations it is especially important for service providers to identify the root cause of a fault in a very short time or even to act proactively. The latter refers to the case of recognizing the influence of a device breakdown on the offered services. As in this scenario the knowledge about services and their SLAs is used, we call it service-oriented. It can be addressed from two directions.
Top-down perspective: Several customers report a problem in a certain time interval. Are these trouble reports correlated? How to identify a resource as being the problem's root cause?
Bottom-up perspective: A device (e.g., router, server) breaks down. Which services, and especially which customers, are affected by this fault?
The rest of the paper is organized as follows. In Section 2 we describe how event correlation is performed today and present a selection of the state-of-the-art event correlation techniques. Section 3 describes the motivation for service-oriented event correlation and its benefits.
After having motivated the need for such a type of correlation, we use two well-known IT service management models to gain requirements for an appropriate workflow modeling and present our proposal for it (see Section 4). In Section 5 we present our information modeling which is derived from the MNM Service Model. An application of the approach for a web hosting scenario is performed in Section 6. The last section concludes the paper and presents future work.
2. TODAY'S EVENT CORRELATION TECHNIQUES
In [11] the task of event correlation is defined as a conceptual interpretation procedure in the sense that a new meaning is assigned to a set of events that happen in a certain time interval. We can distinguish between three aspects for event correlation.
Functional aspect: The correlation focuses on functions which are provided by each network element. It is also taken into account which other functions are used to provide a specific function.
Topology aspect: The correlation takes into account how the network elements are connected to each other and how they interact.
Time aspect: When explicitly regarding time constraints, a start and end time has to be defined for each event. The correlation can use time relationships between the events to perform the correlation. This aspect is only mentioned in some papers [11], but it has to be treated in an event correlation system.
In the event correlation it is also important to distinguish between the knowledge acquisition/representation and the correlation algorithm. Examples of approaches to knowledge acquisition/representation are Gruschke's dependency graphs [6] and Ensel's dependency detection by neural networks [3]. It is also possible to find the dependencies by analyzing interactions [7]. In addition, there is an approach to manage service dependencies with XML and to define a resource description framework [4].
To give an overview of device-oriented event correlation, a selection of event correlation techniques used for this kind of correlation is presented.
Model-based reasoning: Model-based reasoning (MBR) [15, 10, 20] represents a system by modeling each of its components. A model can either represent a physical entity or a logical entity (e.g., LAN, WAN, domain, service, business process). The models for physical entities are called the functional model, while the models for all logical entities are called the logical model. A description of each model contains three categories of information: attributes, relations to other models, and behavior. The event correlation is a result of the collaboration among models.
As all components of a network are represented with their behavior in the model, it is possible to perform simulations to predict how the whole network will behave.
A comparison in [20] showed that a large MBR system is not in all cases easy to maintain. It can be difficult to appropriately model the behavior for all components and their interactions correctly and completely.
An example system for MBR is NetExpert [16] from OSI, which is a hybrid MBR/RBR system (in 2000 OSI was acquired by Agilent Technologies).
Rule-based reasoning: Rule-based reasoning (RBR) [15, 10] uses a set of rules for event correlation. The rules have the form conclusion if condition.
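As a hedged illustration (the cited papers do not give a concrete rule syntax), one such rule might look as follows in Java, anticipating the switch example used later in this paper; all names are ours:

    import java.util.List;

    // One rule in "conclusion if condition" form (illustrative only).
    public class RuleExample {
        record Event(String type, String source) {}

        // Condition: a switch is reported down and nodes behind it are unreachable.
        // Conclusion: record the switch as a root cause candidate.
        static void applyRule(Event e, List<Event> pending, List<String> candidates) {
            boolean unreachableSeen =
                    pending.stream().anyMatch(p -> p.type().equals("NODE_UNREACHABLE"));
            if (e.type().equals("SWITCH_DOWN") && unreachableSeen) {
                candidates.add(e.source());
            }
        }
    }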
The condition uses received events and information about the system, while the conclusion contains actions which can either lead to system changes or use system parameters to choose the next rule.
An advantage of the approach is that the rules are more or less human-readable and therefore their effect is intuitive. The correlation has proved to be very fast in practice by using the RETE algorithm.
In the literature [20, 1] RBR systems are classified as relatively inflexible. Frequent changes in the modeled IT environment would lead to many rule updates. These changes would have to be performed by experts as no automation has currently been established. In some systems information about the network topology which is needed for the event correlation is not used explicitly, but is encoded into the rules. This intransparent usage would make rule updates for topology changes quite difficult. The system brittleness would also be a problem for RBR systems. It means that the system fails if an unknown case occurs, because the case cannot be mapped onto similar cases. The output of RBR systems would also be difficult to predict, because of unforeseen rule interactions in a large rule set. According to [15] an RBR system is only appropriate if the domain for which it is used is small, non-changing, and well understood.
The GTE IMPACT system [11] is an example of a rule-based system. It also uses MBR (GTE merged with Bell Atlantic in 1998 and is now called Verizon [19]).
Codebook approach: The codebook approach [12, 21] has similarities to RBR, but takes a further step and encodes the rules into a correlation matrix.
The approach starts using a dependency graph with two kinds of nodes for the modeling. The first kind of nodes are the faults (denoted as problems in the cited papers) which have to be detected, while the second kind of nodes are observable events (symptoms in the papers) which are caused by the faults or other events. The dependencies between the nodes are denoted as directed edges. It is possible to choose weights for the edges, e.g., a weight for the probability that fault/event A causes event B. Another possible weighting could indicate time dependencies. There are several possibilities to reduce the initial graph. If, e.g., a cyclic dependency of events exists and there are no probabilities for the cycles' edges, all events in the cycle can be treated as one event.
After a final input graph is chosen, the graph is transformed into a correlation matrix where the columns contain the faults and the rows contain the events. If there is a dependency in the graph, the weight of the corresponding edge is put into the corresponding matrix cell. In case no weights are used, the matrix cells get the value 1 for a dependency and 0 otherwise. Afterwards, a simplification can be done, where events which do not help to discriminate faults are deleted. There is a trade-off between the minimization of the matrix and the robustness of the results. If the matrix is minimized as much as possible, some faults can only be distinguished by a single event. If this event cannot be reliably detected, the event correlation system cannot discriminate between the two faults. A measure of how many event observation errors can be compensated by the system is the Hamming distance.
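A minimal sketch of this decoding step, assuming a 0/1 matrix and our own variable names, shows how the Hamming distance compensates observation errors:

    // Codebook sketch: rows are observable events, columns are faults (0/1 weights).
    // Diagnosis selects the fault column closest, in Hamming distance, to the
    // observed event vector, so single observation errors can be tolerated.
    public class CodebookExample {
        static int diagnose(int[][] codebook, int[] observed) {
            int bestFault = -1;
            int bestDistance = Integer.MAX_VALUE;
            for (int f = 0; f < codebook[0].length; f++) {
                int distance = 0;
                for (int e = 0; e < observed.length; e++) {
                    if (codebook[e][f] != observed[e]) distance++; // Hamming distance
                }
                if (distance < bestDistance) { bestDistance = distance; bestFault = f; }
            }
            return bestFault; // index of the most plausible fault
        }
    }

For instance, with the matrix {{1,0},{1,1},{0,1}} for three events and two faults, the observation {1,1,0} matches fault 0 exactly, and {1,0,0} still selects fault 0 at distance 1.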
The\nnumber of rows (events) that can be deleted from the\nmatrix can differ very much depending on the\nrelationships [15].\nThe codebook approach has the advantage that it uses\nlong-term experience with graphs and coding. This\nexperience is used to minimize the dependency graph\nand to select an optimal group of events with respect\nto processing time and robustness against noise.\nA disadvantage of the approach could be that similar\nto RBR frequent changes in the environment make it\nnecessary to frequently edit the input graph.\nSMARTS InCharge [12, 17] is an example of such a\ncorrelation system.\nCase-based reasoning: In contrast to other approaches\ncase-based reasoning (CBR) [14, 15] systems do not\nuse any knowledge about the system structure. The\nknowledge base saves cases with their values for system\nparameters and successful recovery actions for these\ncases. The recovery actions are not performed by the\nCBR system in the first place, but in most cases by a\nhuman operator.\nIf a new case appears, the CBR system compares the\ncurrent system parameters with the system\nparameters in prior cases and tries to find a similar one. To\nidentify such a match it has to be defined for which\nparameters the cases can differ or have to be the same.\nIf a match is found, a learned action can be performed\nautomatically or the operator can be informed with a\nrecovery proposal.\nAn advantage of this approach is that the ability to\nlearn is an integral part of it which is important for\nrapid changing environments.\nThere are also difficulties when applying the approach\n[15]. The fields which are used to find a similar case\nand their importance have to be defined appropriately.\nIf there is a match with a similar case, an adaptation\nof the previous solution to the current one has to be\nfound.\nAn example system for CBR is SpectroRx from\nCabletron Systems. The part of Cabletron that developed\nSpectroRx became an independent software company\nin 2002 and is now called Aprisma Management\nTechnologies [2].\nIn this section four event correlation approaches were\npresented which have evolved into commercial event correlation\nsystems. The correlation approaches have different focuses.\nMBR mainly deals with the knowledge acquisition and\nrepresentation, while RBR and the codebook approach\npropose a correlation algorithm. The focus of CBR is its ability\nto learn from prior cases.\n3. MOTIVATION OF SERVICE-ORIENTED\nEVENT CORRELATION\nFig. 1 shows a general service scenario upon which we\nwill discuss the importance of a service-oriented correlation.\nSeveral services like SSH, a web hosting service, or a video\nconference service are offered by a provider to its customers\nat the customer provider interface. A customer can allow\nseveral users to use a subscribed service. The quality and\ncost issues of the subscribed services between a customer\nand a provider are agreed in SLAs. On the provider side\nthe services use subservices for their provisioning. In case\nof the services mentioned above such subservices are DNS,\nproxy service, and IP service. Both services and subservices\ndepend on resources upon which they are provisioned. 
As displayed in the figure, a service can depend on more than one resource and a resource can be used by one or more services.
Figure 1: Scenario. (Figure: users a to c and a customer with an SLA on the customer side; the services SSH, web hosting and video conference on the provider side, using the subservices DNS, proxy and IP, which in turn depend on resources such as the host sun1, connected by service and resource dependencies.)
To get a common understanding, we distinguish between different types of events:
Resource event: We use the term resource event for network events and system events. A network event refers to events like node up/down or link up/down, whereas system events refer to events like server down or authentication failure.
Service event: A service event indicates that a service does not work properly. A trouble ticket which is generated from a customer report is a kind of such an event. Other service events can be generated by the provider of a service, if the provider himself detects a service malfunction.
In such a scenario the provider may receive service events from customers which indicate that SSH, web hosting service, and video conference service are not available. When referring to the service hierarchy, the provider can conclude in such a case that all services depend on DNS. Therefore, it seems more likely that a common resource which is necessary for this service does not work properly or is not available than to assume three independent service failures. In contrast to a resource-oriented perspective where all of the service events would have to be processed separately, the service events can be linked together. Their information can be aggregated and processed only once. If, e.g., the problem is solved, one common message to the customers that their services are available again is generated and distributed by using the list of linked service events. This is certainly a simplified example. However, it shows the general principle of identifying the common subservices and common resources of different services.
It is important to note that the service-oriented perspective is needed to integrate service aspects, especially QoS aspects. An example of such an aspect is that a fault does not lead to a total failure of a service, but its QoS parameters, i.e., the agreed service levels, at the customer-provider interface might not be met. A degradation in service quality which is caused by high traffic load on the backbone is another example. In the resource-oriented perspective it would be possible to define events which indicate that there is a link usage higher than a threshold, but no mechanism has currently been established to find out which services are affected and whether a QoS violation occurs.
To summarize, the reasons for the necessity of a service-oriented event correlation are the following:
Keeping of SLAs (top-down perspective): The time interval between the first symptom (recognized either by the provider, network management tools, or customers) that a service does not perform properly and the verified fault repair needs to be minimized. This is especially needed with respect to SLAs as such agreements often contain guarantees like a mean time to repair.
Effort reduction (top-down perspective): If several user trouble reports are symptoms of the same fault, fault processing should be performed only once and not several times.
If the fault has been repaired, the\naffected customers should be informed about this\nautomatically.\nImpact analysis (bottom-up perspective): In case of\na fault in a resource, its influence on the associated\nservices and affected customers can be determined. This\nanalysis can be performed for short term (when there\nis currently a resource failure) or long term (e.g.,\nnetwork optimization) considerations.\n4. WORKFLOW MODELING\nIn the following we examine the established IT process\nmanagement frameworks IT Infrastructure Library (ITIL)\nand enhanced Telecom Operations Map (eTOM). The aim is\nfind out where event correlation can be found in the process\nstructure and how detailed the frameworks currently are.\nAfter that we present our solution for a workflow modeling\nfor the service-oriented event correlation.\n4.1 IT Infrastructure Library (ITIL)\nThe British Office of Government Commerce (OGC) and\nthe IT Service Management Forum (itSMF) [9] provide a\ncollection of best practices for IT processes in the area of\nIT service management which is called ITIL. The service\nmanagement is described by 11 modules which are grouped\ninto Service Support Set (provider internal processes) and\nService Delivery Set (processes at the customer-provider\ninterface). Each module describes processes, functions, roles,\nand responsibilities as well as necessary databases and\ninterfaces. In general, ITIL describes contents, processes, and\naims at a high abstraction level and contains no information\nabout management architectures and tools.\nThe fault management is divided into Incident\nManagement process and Problem Management process.\nIncident Management: The Incident Management\ncontains the service desk as interface to customers (e.g.,\nreceives reports about service problems). In case of\nsevere errors structured queries are transferred to the\nProblem Management.\nProblem Management: The Problem Management\"s\ntasks are to solve problems, take care of keeping\npriorities, minimize the reoccurrence of problems, and to\nprovide management information. After receiving\nrequests from the Incident Management, the problem\nhas to be identified and information about necessary\ncountermeasures is transferred to the Change\nManagement.\nThe ITIL processes describe only what has to be done, but\ncontain no information how this can be actually performed.\nAs a consequence, event correlation is not part of the\nmodeling. The ITIL incidents could be regarded as input for the\nservice-oriented event correlation, while the output could be\nused as a query to the ITIL Problem Management.\n4.2 Enhanced Telecom Operations Map\n(eTOM)\nThe TeleManagement Forum (TMF) [18] is an\ninternational non-profit organization from service providers and\nsuppliers in the area of telecommunications services. Similar\nto ITIL a process-oriented framework has been developed at\nfirst, but the framework was designed for a narrower focus,\ni.e., the market of information and communications service\nproviders. A horizontal grouping into processes for\ncustomer care, service development & operations, network &\nsystems management, and partner/supplier is performed. The\nvertical grouping (fulfillment, assurance, billing) reflects the\nservice life cycle.\nIn the area of fault management three processes have been\ndefined along the horizontal process grouping.\nProblem Handling: The purpose of this process is to\nreceive trouble reports from customers and to solve them\nby using the Service Problem Management. 
The aim is also to keep the customer informed about the current status of the trouble report processing as well as about the general network status (e.g., planned maintenance). It is also a task of this process to inform the QoS/SLA management about the impact of current errors on the SLAs.
Service Problem Management: In this process reports about customer-affecting service failures are received and transformed. Their root causes are identified and a problem solution or a temporary workaround is established. The task of the Diagnose Problem subprocess is to find the root cause of the problem by performing appropriate tests. Nothing is said about how this can be done (e.g., no event correlation is mentioned).
Resource Trouble Management: A subprocess of the Resource Trouble Management is responsible for resource failure event analysis, alarm correlation & filtering, and failure event detection & reporting. Another subprocess is used to execute different tests to find a resource failure. There is also another subprocess which keeps track of the status of the trouble report processing. This subprocess is similar to the functionality of a trouble ticket system.
The process description in eTOM is not very detailed. It is useful to have a check list of which aspects of these processes have to be taken into account, but there is no detailed modeling of the relationships and no methodology for applying the framework. Event correlation is only mentioned in the resource management, but it is not used on the service level.
4.3 Workflow Modeling for the Service-Oriented Event Correlation
Fig. 2 shows a general service scenario which we will use as basis for the workflow modeling for the service-oriented event correlation. We assume that the dependencies are already known (e.g., by using the approaches mentioned in Section 2). The provider offers different services which depend on other services called subservices (service dependency). Another kind of dependency exists between services/subservices and resources. These dependencies are called resource dependencies. These two kinds of dependencies are in most cases not used for the event correlation performed today. This resource-oriented event correlation deals only with relationships on the resource level (e.g., network topology).
Figure 2: Different kinds of dependencies for the service-oriented event correlation. (Figure: within the provider domain, services depend on subservices via service dependencies, and services/subservices depend on resources via resource dependencies.)
The dependencies depicted in Figure 2 reflect a situation with no redundancy in the service provisioning. The relationships can be seen as AND relationships. In case of redundancy, if, e.g., a provider has three independent web servers, another modeling (see Figure 3) should be used (OR relationship). In such a case different relationships are possible. The service could be seen as working properly if one of the servers is working or if a certain percentage of them is working.
Figure 3: Modeling of no redundancy (a) and of redundancy (b). (Figure: in a) a service requires all of its resources, an AND relationship; in b) the service requires only a subset of them, an OR relationship.)
As both ITIL and eTOM contain no description of how event correlation and especially service-oriented event correlation should actually be performed, we propose the following design for such a workflow (see Fig. 4). The additional components which are not part of a device-oriented event correlation are depicted with a gray background.
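Before turning to the workflow phases, the AND/OR semantics of Figure 3 can be made concrete; the following sketch uses our own class and method names:

    import java.util.List;
    import java.util.Set;

    // A service's dependency on resources: AND means all resources are required
    // (no redundancy); OR means one working resource suffices (e.g., a provider
    // with three independent web servers).
    public class Dependency {
        public enum Mode { AND, OR }
        private final Mode mode;
        private final List<String> resources;

        public Dependency(Mode mode, List<String> resources) {
            this.mode = mode;
            this.resources = resources;
        }

        // Decide whether the dependent service can be regarded as working.
        public boolean serviceOk(Set<String> failedResources) {
            long working = resources.stream()
                    .filter(r -> !failedResources.contains(r))
                    .count();
            return mode == Mode.AND ? working == resources.size() : working >= 1;
        }
    }

The variant in which a certain percentage of the servers must work would replace the OR test by a threshold such as working >= 0.5 * resources.size().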
The workflow is divided into the phases fault detection, fault diagnosis, and fault recovery.
In the fault detection phase resource events and service events can be generated from different sources. The resource events are issued during the use of a resource, e.g., via SNMP traps. The service events originate from customer trouble reports, which are reported via the Customer Service Management (see below) access point. In addition to these two passive ways to get the events, a provider can also perform active tests. These tests can either deal with the resources (resource active probing) or can assume the role of a virtual customer and test a service or one of its subservices by performing interactions at the service access points (service active probing).
The fault diagnosis phase is composed of three event correlation steps. The first one is performed by the resource event correlator which can be regarded as the event correlator in today's commercial systems. Therefore, it deals only with resource events. The service event correlator does a correlation of the service events, while the aggregate event correlator finally performs a correlation of both resource and service events. If the correlation result of one of the correlation steps is to be improved, it is possible to go back to the fault detection phase and start the active probing to get additional events. These events can be helpful to confirm a correlation result or to reduce the list of possible root causes.
After the event correlation an ordered list of possible root causes is checked by the resource management. When the root cause is found, the failure repair starts. This last step is performed in the fault recovery phase.
Figure 4: Event correlation workflow. (Figure: resource usage and resource active probing feed resource events into the resource event correlator; the CSM access point, intelligent assistant and service active probing feed service events into the service event correlator; the aggregate event correlator combines both and hands a candidate list to the resource management.)
The next subsections present different elements of the event correlation process.
4.4 Customer Service Management and Intelligent Assistant
The Customer Service Management (CSM) access point was proposed by [13] as a single interface between customer and provider. Its functionality is to provide information to the customer about his subscribed services, e.g., reports about the fulfillment of agreed SLAs. It can also be used to subscribe to services or to allow the customer to manage his services in a restricted way. Reports about problems with a service can be sent to the customer via the CSM. The CSM is also contained in the MNM Service Model (see Section 5).
To reduce the effort for the provider's first level support, an Intelligent Assistant can be added to the CSM. The Intelligent Assistant structures the customer's information about a service problem. The information which is needed for a preclassification of the problem is gathered from a list of questions to the customer. The list is not static as the current question depends on the answers to prior questions or on the result of specific tests. A decision tree is used to structure the questions and tests. The tests allow the customer to gain a controlled access to the provider's management. At the LRZ a customer of the E-Mail Service can, e.g., use the Intelligent Assistant to perform ping requests to the mail server.
4.5 Active Probing

Active probing is useful for the provider to check his offered services. The aim is to identify and react to problems before a customer notices them. The probing can be done from a customer point of view or by testing the resources which are part of the services. It can also be useful to perform tests of subservices (the provider's own subservices or subservices offered by suppliers).

Different schedules are possible for performing the active probing. The provider could choose to test important services and resources at regular time intervals. Other tests could be initiated by a user who traverses the decision tree of the Intelligent Assistant, including active tests. Another possible use of active probing is a request from the event correlator when the current correlation result needs to be improved. The results of active probing are reported via service or resource events to the event correlator (and, if the test was demanded by the Intelligent Assistant, the result is reported to it, too). While the events received from management tools and customers denote negative events (something does not work), the events from active probing should also contain positive events, for a better discrimination.

4.6 Event Correlator

The event correlation should not be performed by a single event correlator, but in different steps. The reason for this is the different characteristics of the dependencies (see Fig. 1).

On the resource level there are only relationships between resources (network topology, systems configuration). An example of this is a switch linking separate LANs: if the switch is down, events are reported that other network components located behind the switch are also unreachable. When correlating these events it can be figured out that the switch is the likely error cause. At this stage, the integration of service events does not seem to be helpful. The result of this step is a list of resources which could be the problem's root cause. The resource event correlator performs this step.

In the service-oriented scenario there are also service and resource dependencies. As the next step in the event correlation process, the service events should be correlated with each other using the service dependencies, because the service dependencies have no direct relationship to the resource level. The result of this step, which is performed by the service event correlator, is a list of services/subservices which could contain a failure in a resource. If, e.g., there are service events from customers saying that two services do not work, and both services depend on a common subservice, it seems more likely that the resource failure is located inside the subservice. The output of this correlation is a list of services/subservices which could be affected by a failure in an associated resource.

In the last step, the aggregate event correlator matches the lists from the resource event correlator and the service event correlator to find the problem's possible root cause. This is done by using the resource dependencies. A minimal sketch of this three-step pipeline is given below.
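The following sketch illustrates the three correlation steps under strongly simplified assumptions; the dependency tables, event formats, and all names are invented for illustration:

```python
# Hypothetical sketch of the three correlation steps described above.
# The dependency dictionaries and event formats are illustrative assumptions.

# Resource dependencies: service/subservice -> resources it uses.
resource_deps = {"WebHosting": ["server1", "link1"],
                 "Storage":    ["server1", "nfs"]}
# Service dependencies: service -> subservices it uses.
service_deps = {"WebHosting": ["Storage", "DNS"]}

def resource_correlator(resource_events):
    """Step 1: correlate resource events only -> candidate resources."""
    return {e["resource"] for e in resource_events}

def service_correlator(service_events):
    """Step 2: correlate service events via service dependencies.

    If several failing services share a subservice, the shared
    subservice is the more likely location of the failure."""
    failing = [e["service"] for e in service_events]
    candidates = set(failing)
    for s in failing:
        candidates.update(service_deps.get(s, []))
    shared = [sub for sub in candidates
              if sum(sub in service_deps.get(s, []) for s in failing) > 1]
    return shared or list(candidates)

def aggregate_correlator(resources, services):
    """Step 3: match both candidate lists using the resource dependencies."""
    return [r for r in resources
            if any(r in resource_deps.get(s, []) for s in services)]

res = resource_correlator([{"resource": "server1"}])
srv = service_correlator([{"service": "WebHosting"}, {"service": "Storage"}])
print(aggregate_correlator(res, srv))   # -> ['server1']
```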
The event correlation techniques presented in Section 2 could be used to perform the correlation inside the three event correlators. If the dependencies can be determined precisely, an RBR or codebook approach seems appropriate. A case database (CBR) could be used for cases which cannot be covered by RBR or the codebook approach. Such cases could then be used to improve the modeling so that RBR or the codebook approach can deal with them in future correlations.

5. INFORMATION MODELING

In this section we use a generic model for IT service management to derive the information necessary for the event correlation process.

5.1 MNM Service Model

The MNM Service Model [5] is a generic model for IT service management. A distinction is made between the customer side and the provider side. The customer side contains the basic roles customer and user, while the provider side contains the role provider. The provider makes the service available to the customer side. The service as a whole is divided into usage, which is accessed by the role user, and management, which is used by the role customer.

The model consists of two main views. The Service View (see Fig. 5) shows a common perspective of the service for customer and provider. Everything that is important only for the service realization is not contained in this view. For these details another perspective, the Realization View, is defined (see Fig. 6).

[Figure 5: Service View (the side-independent service with its usage and management functionality, the QoS parameters and service agreement, and the service and CSM access points linking the customer domain, with roles user and customer, to the provider domain, with role provider)]

The Service View contains the service, for which functionality is defined for usage as well as for management. There are two access points (the service access point and the CSM access point) where user and customer can access the usage and management functionality, respectively. Associated with each service is a list of QoS parameters which have to be met by the service at the service access point. The QoS surveillance is performed by the management.

[Figure 6: Realization View (the provider-internal service logic and service management logic, the resources and basic management functionality they use, and the subservice clients and subservice management clients for external subservices)]

In the Realization View the service implementation and the service management implementation are described in detail. For both there are provider-internal resources and subservices. For the service implementation, a service logic uses internal resources (devices, knowledge, staff) and external subservices to provide the service.
Analogously, the service management implementation includes a service management logic using basic management functionalities [8] and external management subservices.

The MNM Service Model can be used for a similar modeling of the subservices used, i.e., the model can be applied recursively.

As the service-oriented event correlation has to use the dependencies of a service on subservices and resources, the model is used in the following to derive the information needed for service events.

5.2 Information Modeling for Service Events

Today's event correlation deals mainly with events which originate from resources. Besides a resource identifier, these events contain information about the resource status, e.g., SNMP variables. To perform a service-oriented event correlation it is necessary to define events which are related to services. These events can be generated from the provider's own service surveillance or from customer reports at the CSM interface. They contain information about problems with the agreed QoS. In our information modeling we define an event superclass which contains common attributes (e.g., time stamp); resource event and service event inherit from this superclass.

Derived from the MNM Service Model, we define the information necessary for a service event (a data-structure sketch follows the list):

Service: As a service event shall represent the problems of a single service, a unique identification of the affected service is contained here.

Event description: This field has to contain a description of the problem. Depending on the interactions at the service access point (Service View), a classification of the problem into different categories should be defined. It should also be possible to add an informal description of the problem.

QoS parameters: For each service, QoS parameters (Service View) are defined between the provider and the customer. This field represents a list of these QoS parameters and agreed service levels. The list can help the provider to set the priority of a problem with respect to the agreed service levels.

Resource list: This list contains the resources (Realization View) which are needed to provide the service. It is used by the provider to check whether one of these resources causes the problem.

Subservice service event identification: In the service hierarchy (Realization View), the service for which this service event has been issued may depend on subservices. If there is a suspicion that one of these subservices causes the problem, child service events are issued from this service event for the subservices. In such a case this field contains links to the corresponding events.

Other event identifications: In the event correlation process the service event can be correlated with other service events or with resource events. This field then contains links to other events which have been correlated to this service event. This is useful, e.g., to send a common message to all affected customers when their subscribed services are available again.

Issuer's identification: This field can contain an identification of the customer who reported the problem, an identification of a service provider's employee (in case the failure has been detected by the provider's own service active probing), or a link to a parent service event.
The identification is needed if there are ambiguities in the service event or if the issuer should be informed (e.g., that the service is available again). The possible issuers refer to the basic roles (customer, provider) in the Service Model.

Assignee: To keep track of the processing, the name and address of the provider's employee who is solving or has solved the problem is also noted. This is a specialization of the provider role in the Service Model.

Dates: This field contains key dates in the processing of the service event, such as the initial date, the problem identification date, and the resolution date. These dates are important to keep track of how quickly problems have been solved.

Status: This field represents the service event's current status (e.g., active, suspended, solved).

Priority: The priority shows which importance the service event has from the provider's perspective. The importance is derived from the service agreement, especially the agreed QoS parameters (Service View).

The fields dates, status, and other event identifications are not derived directly from the Service Model, but are necessary for the event correlation process.
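A possible data-structure rendering of this information model is sketched below; the field names follow the list above, while the concrete types and class names are our assumptions:

```python
# Hypothetical sketch of the event superclass and the service event fields
# described above; concrete types are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class Event:                              # superclass with common attributes
    event_id: str
    timestamp: datetime

@dataclass
class ResourceEvent(Event):
    resource_id: str
    status_info: dict                     # e.g., SNMP variables

@dataclass
class ServiceEvent(Event):
    service: str                          # affected service
    description: str                      # category plus informal text
    qos_parameters: list = field(default_factory=list)     # agreed QoS/SLAs
    resource_list: list = field(default_factory=list)      # resources used
    subservice_events: list = field(default_factory=list)  # child events
    other_events: list = field(default_factory=list)       # correlated events
    issuer: Optional[str] = None          # customer, employee, or parent event
    assignee: Optional[str] = None        # provider's employee
    dates: dict = field(default_factory=dict)  # initial/identified/resolved
    status: str = "active"                # active, suspended, solved
    priority: int = 0                     # derived from the service agreement

ev = ServiceEvent(event_id="SE-1", timestamp=datetime.now(),
                  service="Virtual WWW Server",
                  description="web site content not up-to-date",
                  issuer="customer-4711")
```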
6. APPLICATION OF SERVICE-ORIENTED EVENT CORRELATION FOR A WEB HOSTING SCENARIO

The Leibniz Supercomputing Center (LRZ) is the joint computing center for the Munich universities and research institutions. It also runs the Munich Scientific Network and offers related services. One of these services is the Virtual WWW Server, a web hosting offer for smaller research institutions; it currently has approximately 200 customers.

A subservice of the Virtual WWW Server is the Storage Service, which stores the static and dynamic web pages and uses caching techniques for fast access. Other subservices are the DNS and IP services. When a user accesses a hosted web site via one of the LRZ's Virtual Private Networks, the VPN service is also used. The resources of the Virtual WWW Server include a load balancer and 5 redundant servers. The network connections are also part of the resources, as is the Apache web server application running on the servers. Figure 7 shows the dependencies of the Virtual WWW Server.

[Figure 7: Dependencies of the Virtual WWW Server (the service depends on the DNS, Proxy, IP, and Storage subservices, with AFS, NFS, and DB behind the Storage Service, and on resources such as the load balancer, five redundant servers, the outgoing connection, and servers for content caching, emergency pages, webmail, hosting of the LRZ's own pages, and static/dynamic web pages)]

6.1 Customer Service Management and Intelligent Assistant

The Intelligent Assistant available at the Leibniz Supercomputing Center can currently be used for connectivity or performance problems and for problems with the LRZ E-Mail Service. A selection of possible customer problem reports for the Virtual WWW Server is given in the following:

• The hosted web site is not reachable.
• The web site access is (too) slow.
• The web site contains outdated content.
• The transfer of new content to the LRZ does not change the provided content.
• The web site looks strange (e.g., caused by problems with the HTML version).

These customer reports have to be mapped onto failures in resources. For an unreachable web site, e.g., different root causes are possible, such as a DNS problem, a connectivity problem, or a wrong configuration of the load balancer.

6.2 Active Probing

In general, active probing can be used for services or resources. For the service active probing of the Virtual WWW Server, a virtual customer could be installed. This customer performs typical HTTP requests to web sites and compares the answers with the known content. To check the up-to-dateness of a test web site, the content could contain a time stamp. The service active probing could also include the testing of subservices, e.g., sending requests to the DNS.

The resource active probing performs tests of the resources. Examples are connectivity tests, requests to application processes, and tests of available disk space.

6.3 Event Correlation for the Virtual WWW Server

Figure 8 shows the example processing. At first, a customer who takes a look at his hosted web site reports that the content he had changed is not displayed correctly. This report is transferred to the service management via the CSM interface. An Intelligent Assistant could be used to structure the customer report. The service management translates the customer report into a service event.

Independently of the customer report, the service provider's own service active probing tries to change the content of a test web site. Because this is not possible, a service event is issued.

Meanwhile, a resource event has been reported to the event correlator, because an access of the content caching server to one of the WWW servers failed. As there are no other events at the moment, the resource event correlation cannot correlate this event to other events. At this stage it would be possible for the event correlator to ask the resource management to perform an active probing of related resources.

[Figure 8: Example processing of a customer report (the customer report "web site content not up-to-date" and the service active probing result "web site content change not possible" are forwarded as service events, correlated with the resource event "retrieval of server content failed" in the resource, service, and aggregate event correlation steps, leading to checks of the WWW server and the link, the link failure report, and the link repair)]

Both service events are now transferred to the service event correlator and are correlated. From the correlation of these events it seems likely that either the WWW server itself or the link to the WWW server is the problem's root cause. A wrong web site update procedure inside the content caching server seems less likely, as this would explain only the customer report and not the service active probing result. At this stage a service active probing could be started, but this does not seem useful, as this correlation only deals with the Web Hosting Service and its resources and not with other services.

After the separate correlation of both resource and service events, which can be performed in parallel, the aggregate event correlator is used to correlate both types of events. The additional resource event makes it seem much more likely that the problems are caused by a broken link to the WWW server or by the WWW server itself, and not by the content caching server. In this case the event correlator asks the resource management to check the link and the WWW server. The decision between these two likely error causes cannot be further automated here.

Later, the resource management finds out that a broken link is the failure's root cause. It informs the event correlator about this, and it can be determined that this explains all previous events. Therefore, the event correlation can be stopped at this point. (A small sketch replaying this scenario along the lines of the pipeline from Section 4.6 follows below.)
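Under the same illustrative assumptions as the sketch in Section 4.6 (invented dependency data and event formats), the scenario could be replayed as follows:

```python
# Hypothetical replay of the example processing above; the dependency data
# for the Virtual WWW Server is an assumption made for illustration.

resource_deps = {"Virtual WWW Server": ["www_server", "link_to_www",
                                        "load_balancer"],
                 "Storage": ["www_server", "link_to_www"]}

# Events as they arrive in the example processing.
resource_events = [{"resource": "link_to_www",
                    "info": "retrieval of server content failed"}]
service_events = [
    {"service": "Virtual WWW Server", "issuer": "customer",
     "info": "web site content not up-to-date"},
    {"service": "Virtual WWW Server", "issuer": "active probing",
     "info": "web site content change not possible"},
]

resource_candidates = {e["resource"] for e in resource_events}
service_candidates = {e["service"] for e in service_events}

# Aggregate step: a resource explains the service events if the affected
# service depends on it; here the broken link to the WWW server does.
root_causes = [r for r in resource_candidates
               if any(r in resource_deps.get(s, [])
                      for s in service_candidates)]
print(root_causes)   # -> ['link_to_www']
```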
Depending on the provider's customer relationship management, the identification of the root cause and an expected repair time could be reported to the customers. After the link has been repaired, this can also be reported via the CSM interface.

Even though many details of this event correlation process could also be handled differently, the example shows an important advantage of the service-oriented event correlation: the relationship between the service provisioning and the provider's resources is explicitly modeled. This allows a mapping of the customer report onto the provider-internal resources.

6.4 Event Correlation for Different Services

If a provider like the LRZ offers several services, the service-oriented event correlation can be used to reveal relationships that are not obvious at first. If the LRZ E-Mail Service and its events are viewed in relationship with the events for the Virtual WWW Server, it is possible to identify failures in common subservices and resources. Both services depend on the DNS, which means that customer reports like "I cannot retrieve new e-mail" and "The web site of my research institute is not available" can have a common cause, e.g., the DNS does not work properly.

7. CONCLUSION AND FUTURE WORK

In this paper we showed the need for a service-oriented event correlation. For an IT service provider, this new kind of event correlation makes it possible to automatically map problems with the current service quality onto resource failures. This helps to find a failure's root cause earlier and to reduce the costs of SLA violations. In addition, customer reports can be linked together, and therefore the processing effort can be reduced.

To achieve these benefits, we presented our approach for performing the service-oriented event correlation as well as a modeling of the necessary correlation information. In the future we are going to apply our workflow and information modeling in more detail to services offered by the Leibniz Supercomputing Center.

Several issues have not been treated in detail so far, e.g., the consequences for the service-oriented event correlation if a subservice is offered by another provider. If a service does not perform properly, it has to be determined whether this is caused by the provider himself or by the subservice. In the latter case, appropriate information has to be exchanged between the providers via the CSM interface. Another issue is the use of active probing in the event correlation process, which can improve the result, but can also lead to a correlation delay.

Another important point is the precise definition of dependency, which has also been left out by many other publications. To avoid having too many dependencies in a given situation, one could try to check whether the dependencies currently hold. In the case of a download from a web site, there is a dependency on the DNS subservice only at the beginning: after the address is resolved, a download failure is unlikely to have been caused by the DNS.
Another possibility to reduce the dependencies is to divide a service into its possible user interactions (e.g., an e-mail service into transactions like get mail, send mail, etc.) and to define the dependencies for each user interaction.

Acknowledgments

The authors wish to thank the members of the Munich Network Management (MNM) Team for helpful discussions and valuable comments on previous versions of the paper. The MNM Team, directed by Prof. Dr. Heinz-Gerd Hegering, is a group of researchers of the Munich Universities and the Leibniz Supercomputing Center of the Bavarian Academy of Sciences. Its web server is located at wwwmnmteam.informatik.uni-muenchen.de.

8. REFERENCES
[1] K. Appleby, G. Goldszmidt, and M. Steinder. Yemanja - A Layered Event Correlation Engine for Multi-domain Server Farms. In Proceedings of the Seventh IFIP/IEEE International Symposium on Integrated Network Management, pages 329-344. IFIP/IEEE, May 2001.
[2] Spectrum, Aprisma Corporation. http://www.aprisma.com.
[3] C. Ensel. New Approach for Automated Generation of Service Dependency Models. In Network Management as a Strategy for Evolution and Development; Second Latin American Network Operation and Management Symposium (LANOMS 2001). IEEE, August 2001.
[4] C. Ensel and A. Keller. An Approach for Managing Service Dependencies with XML and the Resource Description Framework. Journal of Network and Systems Management, 10(2), June 2002.
[5] M. Garschhammer, R. Hauck, H.-G. Hegering, B. Kempter, M. Langer, M. Nerb, I. Radisic, H. Roelle, and H. Schmidt. Towards generic Service Management Concepts - A Service Model Based Approach. In Proceedings of the Seventh IFIP/IEEE International Symposium on Integrated Network Management, pages 719-732. IFIP/IEEE, May 2001.
[6] B. Gruschke. Integrated Event Management: Event Correlation using Dependency Graphs. In Proceedings of the 9th IFIP/IEEE International Workshop on Distributed Systems: Operations & Management (DSOM 98). IEEE/IFIP, October 1998.
[7] M. Gupta, A. Neogi, M. Agarwal, and G. Kar. Discovering Dynamic Dependencies in Enterprise Environments for Problem Determination. In Proceedings of the 14th IFIP/IEEE Workshop on Distributed Systems: Operations and Management. IFIP/IEEE, October 2003.
[8] H.-G. Hegering, S. Abeck, and B. Neumair. Integrated Management of Networked Systems - Concepts, Architectures and their Operational Application. Morgan Kaufmann Publishers, 1999.
[9] IT Infrastructure Library, Office of Government Commerce and IT Service Management Forum. http://www.itil.co.uk.
[10] G. Jakobson and M. Weissman. Alarm Correlation. IEEE Network, 7(6), November 1993.
[11] G. Jakobson and M. Weissman. Real-time Telecommunication Network Management: Extending Event Correlation with Temporal Constraints. In Proceedings of the Fourth IEEE/IFIP International Symposium on Integrated Network Management, pages 290-301. IEEE/IFIP, May 1995.
[12] S. Kliger, S. Yemini, Y. Yemini, D. Ohsie, and S. Stolfo. A Coding Approach to Event Correlation. In Proceedings of the Fourth IFIP/IEEE International Symposium on Integrated Network Management, pages 266-277. IFIP/IEEE, May 1995.
[13] M. Langer, S. Loidl, and M. Nerb. Customer Service Management: A More Transparent View To Your Subscribed Services. In Proceedings of the 9th IFIP/IEEE International Workshop on Distributed Systems: Operations & Management (DSOM 98), Newark, DE, USA, October 1998.
[14] L. Lewis. A Case-based Reasoning Approach for the Resolution of Faults in Communication Networks. In Proceedings of the Third IFIP/IEEE International Symposium on Integrated Network Management. IFIP/IEEE, 1993.
[15] L. Lewis. Service Level Management for Enterprise Networks. Artech House, Inc., 1999.
[16] NETeXPERT, Agilent Technologies. http://www.agilent.com/comms/OSS.
[17] InCharge, Smarts Corporation. http://www.smarts.com.
[18] Enhanced Telecom Operations Map, TeleManagement Forum. http://www.tmforum.org.
[19] Verizon Communications. http://www.verizon.com.
[20] H. Wietgrefe, K.-D. Tuchs, K. Jobmann, G. Carls, P. Froelich, W. Nejdl, and S. Steinfeld. Using Neural Networks for Alarm Correlation in Cellular Phone Networks. In International Workshop on Applications of Neural Networks to Telecommunications (IWANNT), May 1997.
[21] S. Yemini, S. Kliger, E. Mozes, Y. Yemini, and D. Ohsie. High Speed and Robust Event Correlation. IEEE Communications Magazine, 34(5), May 1996.
", "keywords": "service level agreement;qos;fault management;process management framework;customer service management;service management;service-oriented management;event correlation;rule-based reasoning;case-based reasoning;service-oriented event correlation"}

{"name": "train_C-77", "title": "Tracking Immediate Predecessors in Distributed Computations", "abstract": "A distributed computation is usually modeled as a partially ordered set of relevant events (the relevant events are a subset of the primitive events produced by the computation). An important causality-related distributed computing problem, that we call the Immediate Predecessors Tracking (IPT) problem, consists in associating with each relevant event, on the fly and without using additional control messages, the set of relevant events that are its immediate predecessors in the partial order. So, IPT is the on-the-fly computation of the transitive reduction (i.e., Hasse diagram) of the causality relation defined by a distributed computation. This paper addresses the IPT problem: it presents a family of protocols that provides each relevant event with a timestamp that exactly identifies its immediate predecessors. The family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). In that sense, this family defines message size-efficient IPT protocols. According to the way the general condition is implemented, different IPT protocols can be obtained. Two of them are exhibited.", "fulltext": "

1. INTRODUCTION

A distributed computation consists of a set of processes that cooperate to achieve a common goal. A main characteristic of these computations lies in the fact that the processes do not share a common global memory and communicate only by exchanging messages over a communication network. Moreover, message transfer delays are finite but unpredictable. This computation model defines what is known as the asynchronous distributed system model. It is particularly important as it includes systems that span large geographic areas and systems that are subject to unpredictable loads. Consequently, the concepts, tools, and mechanisms developed for asynchronous distributed systems prove to be both important and general.

Causality is a key concept for understanding and mastering the behavior of asynchronous distributed systems [18].
More precisely, given two events e and f of a distributed computation, a crucial problem that has to be solved in many distributed applications is to know whether they are causally related, i.e., whether the occurrence of one of them is a consequence of the occurrence of the other. The causal past of an event e is the set of events from which e is causally dependent. Events that are not causally dependent are said to be concurrent. Vector clocks [5, 16] have been introduced to allow processes to track causality (and concurrency) between the events they produce. The timestamp of an event produced by a process is the current value of the vector clock of the corresponding process. In that way, by associating vector timestamps with events it becomes possible to safely decide whether two events are causally related or not.

Usually, according to the problem he focuses on, a designer is interested in only a subset of the events produced by a distributed execution (e.g., only the checkpoint events are meaningful when one is interested in determining consistent global checkpoints [12]). It follows that detecting causal dependencies (or concurrency) on all the events of the distributed computation is not desirable in all applications [7, 15]. In other words, among all the events that may occur in a distributed computation, only a subset of them are relevant. In this paper, we are interested in the restriction of the causality relation to the subset of events defined as the relevant events of the computation.

Being a strict partial order, the causality relation is transitive. As a consequence, among all the relevant events that causally precede a given relevant event e, only a subset are its immediate predecessors: those are the events f such that there is no relevant event on any causal path from f to e. Unfortunately, given only the vector timestamp associated with an event, it is not possible to determine which events of its causal past are its immediate predecessors. This comes from the fact that the vector timestamp associated with e determines, for each process, the last relevant event belonging to the causal past of e, but such an event is not necessarily an immediate predecessor of e. However, some applications [4, 6] require associating with each relevant event only the set of its immediate predecessors. Those applications are mainly related to the analysis of distributed computations. Some of those analyses require the construction of the lattice of consistent cuts produced by the computation [15, 16]. It is shown in [4] that tracking immediate predecessors allows an efficient on-the-fly construction of this lattice. More generally, these applications are interested in the very structure of the causal past. In this context, the determination of the immediate predecessors becomes a major issue [6]. Additionally, in some circumstances, this determination has to satisfy behavioral constraints. If the communication pattern of the distributed computation cannot be modified, the determination has to be done without adding control messages. When the immediate predecessors are used to monitor the computation, it has to be done on the fly.

We call Immediate Predecessor Tracking (IPT) the problem that consists in determining, on the fly and without additional messages, the immediate predecessors of relevant events.
This problem actually consists in determining the transitive reduction (Hasse diagram) of the causality graph generated by the relevant events of the computation. Solving this problem requires tracking causality, hence using vector clocks. Previous works have addressed the efficient implementation of vector clocks to track causal dependence on relevant events. Their aim was to reduce the size of the timestamps attached to messages. An efficient vector clock implementation suited to systems with FIFO channels is proposed in [19]. Another efficient implementation that does not depend on channel ordering properties is described in [11]. The notion of causal barrier is introduced in [2, 17] to reduce the size of the control information required to implement causal multicast. However, none of these papers considers the IPT problem. This problem has been addressed for the first time (to our knowledge) in [4, 6], where an IPT protocol is described, but without a correctness proof. Moreover, in this protocol, timestamps attached to messages are of size n. This raises the following question, which, to our knowledge, has never been answered: "Are there efficient vector clock implementation techniques that are suitable for the IPT problem?"

This paper has three main contributions: (1) a positive answer to the previous open question, (2) the design of a family of efficient IPT protocols, and (3) a formal correctness proof of the associated protocols. From a methodological point of view, the paper uses a top-down approach: it states abstract properties from which more concrete properties and protocols are derived. The family of IPT protocols is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than the system size (i.e., smaller than the number of processes composing the system). In that sense, this family defines low-cost IPT protocols with respect to message size. In addition to efficiency, the proposed approach has an interesting design property: the family is built incrementally, in three steps. The basic vector clock protocol is first enriched by adding to each process a boolean vector whose management allows the processes to track the immediate predecessor events. Then, a general condition is stated to reduce the size of the control information carried by messages. Finally, according to the way this condition is implemented, three IPT protocols are obtained.

The paper is composed of seven sections. Section 2 introduces the computation model, vector clocks, and the notion of relevant events. Section 3 presents the first step of the construction, which results in an IPT protocol in which each message carries a vector clock and a boolean array, both of size n (the number of processes). Section 4 improves this protocol by providing the general condition that allows a message to carry control information whose size can be smaller than n. Section 5 provides instantiations of this condition. Section 6 provides a simulation study comparing the behaviors of the proposed protocols. Finally, Section 7 concludes the paper. (Due to space limitations, proofs of lemmas and theorems are omitted; they can be found in [1].)

2. MODEL AND VECTOR CLOCK

2.1 Distributed Computation

A distributed program is made up of sequential local programs which communicate and synchronize only by exchanging messages. A distributed computation describes the execution of a distributed program.
The execution of a local program gives rise to a sequential process. Let $\{P_1, P_2, \ldots, P_n\}$ be the finite set of sequential processes of the distributed computation. Each ordered pair of communicating processes $(P_i, P_j)$ is connected by a reliable channel $c_{ij}$ through which $P_i$ can send messages to $P_j$. We assume that each message is unique and that a process does not send messages to itself.¹ Message transmission delays are finite but unpredictable. Moreover, channels are not necessarily FIFO. Process speeds are positive but arbitrary. In other words, the underlying computation model is asynchronous.

The local program associated with $P_i$ can include send, receive, and internal statements. The execution of such a statement produces a corresponding send/receive/internal event. These events are called primitive events. Let $e_i^x$ be the $x$-th event produced by process $P_i$. The sequence $h_i = e_i^1 e_i^2 \ldots e_i^x \ldots$ constitutes the history of $P_i$, denoted $H_i$. Let $H = \cup_{i=1}^n H_i$ be the set of events produced by a distributed computation. This set is structured as a partial order by Lamport's happened-before relation [14] (denoted $\xrightarrow{hb}$), defined as follows: $e_i^x \xrightarrow{hb} e_j^y$ if and only if

$(i = j \wedge x + 1 = y)$ (local precedence), or
$(\exists m : e_i^x = send(m) \wedge e_j^y = receive(m))$ (message precedence), or
$(\exists e_k^z : e_i^x \xrightarrow{hb} e_k^z \wedge e_k^z \xrightarrow{hb} e_j^y)$ (transitive closure).

$\max(e_i^x, e_j^y)$ is a partial function defined only when $e_i^x$ and $e_j^y$ are ordered: $\max(e_i^x, e_j^y) = e_i^x$ if $e_j^y \xrightarrow{hb} e_i^x$, and $\max(e_i^x, e_j^y) = e_j^y$ if $e_i^x \xrightarrow{hb} e_j^y$.

Clearly the restriction of $\xrightarrow{hb}$ to $H_i$, for a given $i$, is a total order. Thus we will use the notation $e_i^x < e_i^y$ iff $x < y$. Throughout the paper, we will use the following notation: if $e \in H_i$ is not the first event produced by $P_i$, then $pred(e)$ denotes the event immediately preceding $e$ in the sequence $H_i$. If $e$ is the first event produced by $P_i$, then $pred(e)$ is denoted by $\bot$ (meaning that there is no such event), and $\forall e \in H_i : \bot < e$. The partial order $\widehat{H} = (H, \xrightarrow{hb})$ constitutes a formal model of the distributed computation it is associated with.

¹ This assumption is made only to keep the protocols simple.

[Figure 1: Timestamped Relevant Events and Immediate Predecessors Graph (Hasse Diagram). The left part is a space-time diagram of three processes whose relevant events, identified (1,1) through (3,2), carry vector timestamps such as [1,0,0], [2,2,1], and [2,3,1]; the right part is the corresponding immediate-predecessor graph.]

2.2 Relevant Events

For a given observer of a distributed computation, only some events are relevant² [7, 9, 15]. An interesting example of an observation is the detection of predicates on consistent global states of a distributed computation [3, 6, 8, 9, 13, 15]. In that case, a relevant event corresponds to the modification of a local variable involved in the global predicate. Another example is the checkpointing problem, where a relevant event is the definition of a local checkpoint [10, 12, 20].

The left part of Figure 1 depicts a distributed computation using the classical space-time diagram. In this figure, only relevant events are represented. The sequence of relevant events produced by process $P_i$ is denoted by $R_i$, and $R = \cup_{i=1}^n R_i \subseteq H$ denotes the set of all relevant events.

² Those events are sometimes called observable events.
Let $\rightarrow$ be the relation on $R$ defined as follows: $\forall (e, f) \in R \times R : (e \rightarrow f) \Leftrightarrow (e \xrightarrow{hb} f)$. The poset $(R, \rightarrow)$ constitutes an abstraction of the distributed computation [7]. In the following we consider a distributed computation at this abstraction level. Moreover, without loss of generality, we consider that the set of relevant events is a subset of the internal events (if a communication event has to be observed, a relevant internal event can be generated just before a send and just after a receive communication event). Each relevant event is identified by a pair (process id, sequence number) (see Figure 1).

Definition 1. The relevant causal past of an event $e \in H$ is the (partially ordered) subset of relevant events $f$ such that $f \xrightarrow{hb} e$. It is denoted $\uparrow(e)$. We have $\uparrow(e) = \{f \in R \mid f \xrightarrow{hb} e\}$.

Note that, if $e \in R$, then $\uparrow(e) = \{f \in R \mid f \rightarrow e\}$. In the computation described in Figure 1 we have, for the event $e$ identified (2,2): $\uparrow(e) = \{(1,1), (1,2), (2,1), (3,1)\}$. The following properties are immediate consequences of the previous definitions. Let $e \in H$.

CP1. If $e$ is not a receive event, then $\uparrow(e)$ is equal to:
$\emptyset$ if $pred(e) = \bot$;
$\uparrow(pred(e)) \cup \{pred(e)\}$ if $pred(e) \in R$;
$\uparrow(pred(e))$ if $pred(e) \notin R$.

CP2. If $e$ is the receive event of a message $m$, then $\uparrow(e)$ is equal to:
$\uparrow(send(m))$ if $pred(e) = \bot$;
$\uparrow(pred(e)) \cup \uparrow(send(m)) \cup \{pred(e)\}$ if $pred(e) \in R$;
$\uparrow(pred(e)) \cup \uparrow(send(m))$ if $pred(e) \notin R$.

Definition 2. Let $e \in H_i$. For every $j$ such that $\uparrow(e) \cap R_j \neq \emptyset$, the last relevant event of $P_j$ with respect to $e$ is $lastr(e, j) = \max\{f \mid f \in \uparrow(e) \cap R_j\}$. When $\uparrow(e) \cap R_j = \emptyset$, $lastr(e, j)$ is denoted by $\bot$ (meaning that there is no such event).

Consider the event $e$ identified (2,2) in Figure 1. We have $lastr(e,1) = (1,2)$, $lastr(e,2) = (2,1)$, and $lastr(e,3) = (3,1)$. The following properties relate the events $lastr(e, j)$ and $lastr(f, j)$ for all the predecessors $f$ of $e$ in the relation $\xrightarrow{hb}$. They follow directly from the definitions. Let $e \in H_i$.

LR0. $\forall e \in H_i$, $lastr(e, i)$ is equal to:
$\bot$ if $pred(e) = \bot$;
$pred(e)$ if $pred(e) \in R$;
$lastr(pred(e), i)$ if $pred(e) \notin R$.

LR1. If $e$ is not a receive event: $\forall j \neq i : lastr(e, j) = lastr(pred(e), j)$.

LR2. If $e$ is the receive event of $m$: $\forall j \neq i : lastr(e, j) = \max(lastr(pred(e), j), lastr(send(m), j))$.

2.3 Vector Clock System

Definition. As a fundamental concept of causality theory, vector clocks were introduced in 1988, simultaneously and independently, by Fidge [5] and Mattern [16]. A vector clock system is a mechanism that associates timestamps with events in such a way that the comparison of their timestamps indicates whether the corresponding events are causally related or not (and, if they are, which one came first). More precisely, each process $P_i$ has a vector of integers $VC_i[1..n]$ such that $VC_i[j]$ is the number of relevant events produced by $P_j$ that belong to the current relevant causal past of $P_i$. Note that $VC_i[i]$ counts the number of relevant events produced so far by $P_i$.
When a process $P_i$ produces a (relevant) event $e$, it associates with $e$ a vector timestamp whose value (denoted $e.VC$) is equal to the current value of $VC_i$.

Vector Clock Implementation. The following implementation of vector clocks [5, 16] is based on the observation that $\forall i, \forall e \in H_i, \forall j : e.VC_i[j] = y \Leftrightarrow lastr(e, j) = e_j^y$, where $e.VC_i$ is the value of $VC_i$ just after the occurrence of $e$ (this relation results directly from the properties LR0, LR1, and LR2). Each process $P_i$ manages its vector clock $VC_i[1..n]$ according to the following rules:

VC0. $VC_i[1..n]$ is initialized to $[0, \ldots, 0]$.

VC1. Each time it produces a relevant event $e$, $P_i$ increments its vector clock entry $VC_i[i]$ ($VC_i[i] := VC_i[i] + 1$) to indicate that it has produced one more relevant event; then $P_i$ associates with $e$ the timestamp $e.VC = VC_i$.

VC2. When a process $P_i$ sends a message $m$, it attaches to $m$ the current value of $VC_i$. Let $m.VC$ denote this value.

VC3. When $P_i$ receives a message $m$, it updates its vector clock as follows: $\forall k : VC_i[k] := \max(VC_i[k], m.VC[k])$.
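A minimal sketch of rules VC0-VC3 could look as follows; the class and method names are illustrative, the transport is assumed to deliver (payload, attached vector) pairs, and indices are 0-based instead of 1..n:

```python
# Minimal sketch of rules VC0-VC3 under the assumptions stated above.

class Process:
    def __init__(self, pid, n):
        self.pid = pid
        self.vc = [0] * n                 # VC0: initialized to [0,...,0]

    def relevant_event(self):
        self.vc[self.pid] += 1            # VC1: count one more relevant event
        return list(self.vc)              # timestamp e.VC associated with e

    def send(self, payload):
        return (payload, list(self.vc))   # VC2: attach the current VC

    def receive(self, message):
        _, m_vc = message
        for k in range(len(self.vc)):     # VC3: component-wise maximum
            self.vc[k] = max(self.vc[k], m_vc[k])

p1, p2 = Process(0, 3), Process(1, 3)
e = p1.relevant_event()                   # e.VC = [1, 0, 0]
p2.receive(p1.send("m"))
print(p2.relevant_event())                # -> [1, 1, 0]
```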
3. IMMEDIATE PREDECESSORS

In this section, the Immediate Predecessor Tracking (IPT) problem is stated (Section 3.1). Then, some technical properties of immediate predecessors are stated and proved (Section 3.2). These properties are used to design the basic IPT protocol and prove its correctness (Section 3.3). This IPT protocol, previously presented in [4] without proof, is built from a vector clock protocol by adding the management of a local boolean array at each process.

3.1 The IPT Problem

As indicated in the introduction, some applications (e.g., analysis of distributed executions [6], detection of distributed properties [7]) require determining, on the fly and without additional messages, the transitive reduction of the relation $\rightarrow$ (i.e., we must not consider transitive causal dependency). Given two relevant events $f$ and $e$, we say that $f$ is an immediate predecessor of $e$ if $f \rightarrow e$ and there is no relevant event $g$ such that $f \rightarrow g \rightarrow e$.

Definition 3. The Immediate Predecessor Tracking (IPT) problem consists in associating with each relevant event $e$ the set of relevant events that are its immediate predecessors. Moreover, this has to be done on the fly and without additional control messages (i.e., without modifying the communication pattern of the computation).

As noted in the Introduction, the IPT problem is the computation of the Hasse diagram associated with the partially ordered set of the relevant events produced by a distributed computation.

3.2 Formal Properties of IPT

In order to design a protocol solving the IPT problem, it is useful to consider the notion of immediate relevant predecessor of any event, whether relevant or not. First, we observe that, by definition, the immediate predecessor on $P_j$ of an event $e$ is necessarily the event $lastr(e, j)$. Second, for $lastr(e, j)$ to be an immediate predecessor of $e$, there must not be another event $lastr(e, k)$ on a path between $lastr(e, j)$ and $e$. These observations are formalized in the following definition:

Definition 4. Let $e \in H_i$. The set of immediate relevant predecessors of $e$ (denoted $IP(e)$) is the set of relevant events $lastr(e, j)$ ($j = 1, \ldots, n$) such that $\forall k : lastr(e, j) \notin \uparrow(lastr(e, k))$.

It follows from this definition that $IP(e) \subseteq \{lastr(e, j) \mid j = 1, \ldots, n\} \subset \uparrow(e)$. When we consider Figure 1, the graph depicted in its right part describes the immediate predecessors of the relevant events of the computation defined in its left part; more precisely, a directed edge $(e, f)$ means that the relevant event $e$ is an immediate predecessor of the relevant event $f$.³

The following lemmas show how the set of immediate predecessors of an event is related to those of its predecessors in the relation $\xrightarrow{hb}$. They will be used to design and prove the protocols solving the IPT problem. To ease the reading of the paper, their proofs are presented in Appendix A.

The intuitive meaning of the first lemma is the following: if $e$ is not a receive event, all the causal paths arriving at $e$ have $pred(e)$ as next-to-last event (see CP1). So, if $pred(e)$ is a relevant event, all the relevant events belonging to its relevant causal past are separated from $e$ by $pred(e)$, and $pred(e)$ becomes the only immediate predecessor of $e$. In other words, the event $pred(e)$ constitutes a "reset" with respect to the set of immediate predecessors of $e$. On the other hand, if $pred(e)$ is not relevant, it does not separate its relevant causal past from $e$.

Lemma 1. If $e$ is not a receive event, $IP(e)$ is equal to:
$\emptyset$ if $pred(e) = \bot$;
$\{pred(e)\}$ if $pred(e) \in R$;
$IP(pred(e))$ if $pred(e) \notin R$.

The intuitive meaning of the next lemma is as follows: if $e$ is a receive event $receive(m)$, the causal paths arriving at $e$ have either $pred(e)$ or $send(m)$ as next-to-last events. If $pred(e)$ is relevant, then, as explained for the previous lemma, this event hides from $e$ all its relevant causal past and becomes an immediate predecessor of $e$. Concerning the last relevant predecessors of $send(m)$, only those that are not predecessors of $pred(e)$ remain immediate predecessors of $e$.

Lemma 2. Let $e \in H_i$ be the receive event of a message $m$. If $pred(e) \in R_i$, then, $\forall j$, $IP(e) \cap R_j$ is equal to:
$\{pred(e)\}$ if $j = i$;
$\emptyset$ if $lastr(pred(e), j) \geq lastr(send(m), j)$;
$IP(send(m)) \cap R_j$ if $lastr(pred(e), j) < lastr(send(m), j)$.

The intuitive meaning of the next lemma is the following: if $e$ is a receive event $receive(m)$ and $pred(e)$ is not relevant, the last relevant events in the relevant causal past of $e$ are obtained by merging those of $pred(e)$ and those of $send(m)$ and by taking the latest on each process. So, the immediate predecessors of $e$ are either those of $pred(e)$ or those of $send(m)$. On a process where the last relevant events of $pred(e)$ and of $send(m)$ are the same event $f$, none of the paths from $f$ to $e$ must contain another relevant event, and thus $f$ must be an immediate predecessor of both events $pred(e)$ and $send(m)$.

Lemma 3. Let $e \in H_i$ be the receive event of a message $m$. If $pred(e) \notin R_i$, then, $\forall j$, $IP(e) \cap R_j$ is equal to:
$IP(pred(e)) \cap R_j$ if $lastr(pred(e), j) > lastr(send(m), j)$;
$IP(send(m)) \cap R_j$ if $lastr(pred(e), j) < lastr(send(m), j)$;
$IP(pred(e)) \cap IP(send(m)) \cap R_j$ if $lastr(pred(e), j) = lastr(send(m), j)$.

3.3 A Basic IPT Protocol

The basic protocol proposed here associates with each relevant event $e$ an attribute encoding the set $IP(e)$ of its immediate predecessors.

³ Actually, this graph is the Hasse diagram of the partial order associated with the distributed computation.
From the previous lemmas, the set $IP(e)$ of any event $e$ depends on the sets $IP$ of the events $pred(e)$ and/or $send(m)$ (when $e = receive(m)$). Hence the idea to introduce a data structure allowing the sets $IP$ to be managed inductively on the poset $(H, \xrightarrow{hb})$. To take into account the information from $pred(e)$, each process manages a boolean array $IP_i$ such that, $\forall e \in H_i$, the value of $IP_i$ when $e$ occurs (denoted $e.IP_i$) is the boolean array representation of the set $IP(e)$. More precisely, $\forall j : IP_i[j] = 1 \Leftrightarrow lastr(e, j) \in IP(e)$. As recalled in Section 2.3, the knowledge of $lastr(e, j)$ (for every $e$ and every $j$) is based on the management of the vectors $VC_i$. Thus, the set $IP(e)$ is determined in the following way:

$IP(e) = \{e_j^y \mid e.VC_i[j] = y \wedge e.IP_i[j] = 1,\ j = 1, \ldots, n\}$.

Each process $P_i$ updates $IP_i$ according to Lemmas 1, 2, and 3:

1. It results from Lemma 1 that, if $e$ is not a receive event, the current value of $IP_i$ is sufficient to determine $e.IP_i$. It results from Lemmas 2 and 3 that, if $e$ is a receive event ($e = receive(m)$), then determining $e.IP_i$ involves information related to the event $send(m)$. More precisely, this information involves $IP(send(m))$ and the timestamp of $send(m)$ (needed to compare the events $lastr(send(m), j)$ and $lastr(pred(e), j)$, for every $j$). So, both vectors $send(m).VC_j$ and $send(m).IP_j$ (assuming $send(m)$ is produced by $P_j$) are attached to message $m$.

2. Moreover, $IP_i$ must be updated upon the occurrence of each event. In fact, the value of $IP_i$ just after an event $e$ is used to determine the value $succ(e).IP_i$. In particular, as stated in the lemmas, the determination of $succ(e).IP_i$ depends on whether $e$ is relevant or not. Thus, the value of $IP_i$ just after the occurrence of event $e$ must keep track of this event.

The following protocol, previously presented in [4] without proof, ensures the correct management of the arrays $VC_i$ (as in Section 2.3) and $IP_i$ (according to the lemmas of Section 3.2). The timestamp associated with a relevant event $e$ is denoted $e.TS$. (A sketch of these rules follows below.)

R0. Initialization: Both $VC_i[1..n]$ and $IP_i[1..n]$ are initialized to $[0, \ldots, 0]$.

R1. Each time it produces a relevant event $e$:
- $P_i$ associates with $e$ the timestamp $e.TS = \{(k, VC_i[k]) \mid IP_i[k] = 1\}$,
- $P_i$ increments its vector clock entry $VC_i[i]$ (namely, it executes $VC_i[i] := VC_i[i] + 1$),
- $P_i$ resets $IP_i$: $\forall \ell \neq i : IP_i[\ell] := 0$; $IP_i[i] := 1$.

R2. When $P_i$ sends a message $m$ to $P_j$, it attaches to $m$ the current values of $VC_i$ (denoted $m.VC$) and of the boolean array $IP_i$ (denoted $m.IP$).

R3. When it receives a message $m$ from $P_j$, $P_i$ executes the following updates, for every $k \in [1..n]$:
- if $VC_i[k] < m.VC[k]$ then $VC_i[k] := m.VC[k]$; $IP_i[k] := m.IP[k]$;
- if $VC_i[k] = m.VC[k]$ then $IP_i[k] := \min(IP_i[k], m.IP[k])$;
- if $VC_i[k] > m.VC[k]$ then skip.

The proof of the following theorem follows directly from Lemmas 1, 2, and 3.

Theorem 1. The protocol described in Section 3.3 solves the IPT problem: for any relevant event $e$, the timestamp $e.TS$ contains the identifiers of all its immediate predecessors and of no other events.
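The following sketch illustrates rules R0-R3 under the same conventions as the vector clock sketch above (illustrative names, 0-based indices):

```python
# Hypothetical sketch of rules R0-R3 of the basic IPT protocol.

class IPTProcess:
    def __init__(self, i, n):
        self.i = i
        self.vc = [0] * n            # R0: vector clock VC_i
        self.ip = [0] * n            # R0: boolean array IP_i

    def relevant_event(self):
        # R1: the timestamp identifies exactly the immediate predecessors.
        ts = {(k, self.vc[k]) for k in range(len(self.vc)) if self.ip[k]}
        self.vc[self.i] += 1
        self.ip = [0] * len(self.vc)
        self.ip[self.i] = 1          # the new event hides its causal past
        return ts

    def send(self, payload):
        return (payload, list(self.vc), list(self.ip))   # R2

    def receive(self, message):
        _, m_vc, m_ip = message
        for k in range(len(self.vc)):                    # R3
            if self.vc[k] < m_vc[k]:
                self.vc[k], self.ip[k] = m_vc[k], m_ip[k]
            elif self.vc[k] == m_vc[k]:
                self.ip[k] = min(self.ip[k], m_ip[k])

p1, p2 = IPTProcess(0, 2), IPTProcess(1, 2)
p1.relevant_event()                  # relevant event (P1, 1)
p2.receive(p1.send("m"))
print(p2.relevant_event())           # -> {(0, 1)}: (P1,1) is its immediate
                                     #    predecessor
```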
It is then\nshown (Section 4.2) that this condition is both sufficient and\nnecessary.\nHowever, this general condition cannot be locally\nevaluated by a process that is about to send a message. Thus,\nlocally evaluable approximations of this general condition\nmust be defined. To each approximation corresponds a\nprotocol, implemented with additional local data structures. In\nthat sense, the general condition defines a family of IPT\nprotocols, that solve the previously open problem. This issue\nis addressed in Section 5.\n4.1 To Transmit or Not to Transmit Control\nInformation\nLet us consider the previous IPT protocol (Section 3.3).\nRule R3 shows that a process Pj does not systematically\nupdate each entry V Cj[k] each time it receives a message\nm from a process Pi: there is no update of V Cj[k] when\nV Cj[k] \u2265 m.V C[k]. In such a case, the value m.V C[k] is\nuseless, and could be omitted from the control information\ntransmitted with m by Pi to Pj.\nSimilarly, some entries IPj[k] are not updated when a\nmessage m from Pi is received by Pj. This occurs when\n0 < V Cj[k] = m.V C[k] \u2227 m.IP[k] = 1, or when V Cj [k] >\nm.V C[k], or when m.V C[k] = 0 (in the latest case, as\nm.IP[k] = IPi[k] = 0 then no update of IPj[k] is necessary).\nDifferently, some other entries are systematically reset to 0\n(this occurs when 0 < V Cj [k] = m.V C[k] \u2227 m.IP[k] = 0).\nThese observations lead to the definition of the condition\nK(m, k) that characterizes which entries of vectors V Ci and\nIPi can be omitted from the control information attached\nto a message m sent by a process Pi to a process Pj:\nDefinition 5. K(m, k) \u2261\n(send(m).V Ci[k] = 0)\n\u2228 (send(m).V Ci[k] < pred(receive(m)).V Cj[k])\n\u2228\n;\n(send(m).V Ci[k] = pred(receive(m)).V Cj[k])\n\u2227(send(m).IPi[k] = 1) .\n4.2 A Necessary and Sufficient Condition\nWe show here that the condition K(m, k) is both\nnecessary and sufficient to decide which triples of the form\n(k, send(m).V Ci[k], send(m).IPi[k]) can be omitted in an\noutgoing message m sent by Pi to Pj. A triple attached to\nm will also be denoted (k, m.V C[k], m.IP[k]). Due to space\nlimitations, the proofs of Lemma 4 and Lemma 5 are given\nin [1]. (The proof of Theorem 2 follows directly from these\nlemmas.)\n214\nLemma 4. (Sufficiency) If K(m, k) is true, then the triple\n(k, m.V C[k], m.IP[k]) is useless with respect to the correct\nmanagement of IPj[k] and V Cj [k].\nLemma 5. (Necessity) If K(m, k) is false, then the triple\n(k, m.V C[k], m.IP[k]) is necessary to ensure the correct\nmanagement of IPj[k] and V Cj [k].\nTheorem 2. When a process Pi sends m to a process Pj,\nthe condition K(m, k) is both necessary and sufficient not to\ntransmit the triple (k, send(m).V Ci[k], send(m).IPi[k]).\n5. A FAMILY OF IPT PROTOCOLS BASED\nON EVALUABLE CONDITIONS\nIt results from the previous theorem that, if Pi could\nevaluate K(m, k) when it sends m to Pj, this would\nallow us improve the previous IPT protocol in the following\nway: in rule R2, the triple (k, V Ci[k], IPi[k]) is\ntransmitted with m only if \u00acK(m, k). Moreover, rule R3 is\nappropriately modified to consider only triples carried by m.\nHowever, as previously mentioned, Pi cannot locally\nevaluate K(m, k) when it is about to send m. More\nprecisely, when Pi sends m to Pj , Pi knows the exact values of\nsend(m).V Ci[k] and send(m).IPi[k] (they are the current\nvalues of V Ci[k] and IPi[k]). But, as far as the value of\npred(receive(m)).V Cj[k] is concerned, two cases are\npossible. 
Case (i): if $pred(receive(m)) \xrightarrow{hb} send(m)$, then $P_i$ can know the value of $pred(receive(m)).VC_j[k]$ and consequently can evaluate $K(m, k)$. Case (ii): if $pred(receive(m))$ and $send(m)$ are concurrent, $P_i$ cannot know the value of $pred(receive(m)).VC_j[k]$ and consequently cannot evaluate $K(m, k)$. Moreover, when it sends $m$ to $P_j$, whichever case (i or ii) actually occurs, $P_i$ has no way to know which case occurs. Hence the idea to define evaluable approximations of the general condition. Let $K'(m, k)$ be an approximation of $K(m, k)$ that can be evaluated by a process $P_i$ when it sends a message $m$. To be correct, the condition $K'$ must ensure that, every time $P_i$ should transmit a triple $(k, VC_i[k], IP_i[k])$ according to Theorem 2 (i.e., each time $\neg K(m, k)$), $P_i$ transmits this triple when it uses condition $K'$. Hence the definition of a correct evaluable approximation:

Definition 6. A condition $K'$, locally evaluable by a process when it sends a message $m$ to another process, is correct if $\forall (m, k) : \neg K(m, k) \Rightarrow \neg K'(m, k)$ or, equivalently, $\forall (m, k) : K'(m, k) \Rightarrow K(m, k)$.

This definition means that a protocol evaluating $K'$ to decide which triples must be attached to messages does not miss triples whose transmission is required by Theorem 2. Let us consider the constant condition (denoted $K1$) that is always false, i.e., $\forall (m, k) : K1(m, k) = false$. This trivially correct approximation of $K$ actually corresponds to the particular IPT protocol described in Section 3 (in which each message carries a whole vector clock and a whole boolean vector). The next section presents a better approximation of $K$ (denoted $K2$).

5.1 A Boolean Matrix-Based Evaluable Condition

[Figure 2: The Evaluable Condition K2 (after $P_i$ learns, via a message from $P_j$, that $VC_j[k] \geq x$ while $VC_i[k] = x$ and $IP_i[k] = 1$, the triple for $k$ need not be attached to a message $m$ sent to $P_j$)]

Condition $K2$ is based on the observation that condition $K$ is composed of sub-conditions, some of which can be evaluated locally while the others cannot. More precisely, $K \equiv a \vee \alpha \vee (\beta \wedge b)$, where $a \equiv (send(m).VC_i[k] = 0)$ and $b \equiv (send(m).IP_i[k] = 1)$ are locally evaluable, whereas $\alpha \equiv (send(m).VC_i[k] < pred(receive(m)).VC_j[k])$ and $\beta \equiv (send(m).VC_i[k] = pred(receive(m)).VC_j[k])$ are not. But, from simple boolean calculus, $a \vee ((\alpha \vee \beta) \wedge b) \Rightarrow a \vee \alpha \vee (\beta \wedge b) \equiv K$. This leads to the condition $K' \equiv a \vee (\gamma \wedge b)$, where $\gamma \equiv \alpha \vee \beta \equiv (send(m).VC_i[k] \leq pred(receive(m)).VC_j[k])$, i.e., $K' \equiv ((send(m).VC_i[k] \leq pred(receive(m)).VC_j[k]) \wedge (send(m).IP_i[k] = 1)) \vee (send(m).VC_i[k] = 0)$.

So, $P_i$ needs to approximate the predicate $send(m).VC_i[k] \leq pred(receive(m)).VC_j[k]$. To be correct, this approximation has to be a locally evaluable predicate $c_i(j, k)$ such that, when $P_i$ is about to send a message $m$ to $P_j$, $c_i(j, k) \Rightarrow (send(m).VC_i[k] \leq pred(receive(m)).VC_j[k])$. Informally, this means that, when $c_i(j, k)$ holds, the local context of $P_i$ allows it to deduce that the receipt of $m$ by $P_j$ will not lead to an update of $VC_j[k]$ ($P_j$ knows as much as $P_i$ about $P_k$). Hence, the concrete condition $K2$ is the following: $K2 \equiv (send(m).VC_i[k] = 0) \vee (c_i(j, k) \wedge send(m).IP_i[k] = 1)$.

Let us now examine the design of such a predicate (denoted $c_i$). First, the case $j = i$ can be ignored, since it is assumed (Section 2.1) that a process never sends a message to itself.
Let us now examine the design of such a predicate (denoted ci). First, the case j = i can be ignored, since it is assumed (Section 2.1) that a process never sends a message to itself. Second, in the case j = k, the relation send(m).VCi[j] ≤ pred(receive(m)).VCj[j] is always true, because the receipt of m by Pj cannot update VCj[j]. Thus, ∀j ≠ i : ci(j, j) must be true. Now, let us consider the case where j ≠ i and j ≠ k (Figure 2). Suppose that there exists an event e = receive(m′) with e < send(m), m′ sent by Pj and piggybacking the triple (k, m′.VC[k], m′.IP[k]), and m′.VC[k] ≥ VCi[k] (hence m′.VC[k] = receive(m′).VCi[k]). As VCj[k] cannot decrease, this means that, as long as VCi[k] does not increase, for every message m sent by Pi to Pj we have the following: send(m).VCi[k] = receive(m′).VCi[k] = send(m′).VCj[k] ≤ receive(m).VCj[k], i.e., ci(j, k) must remain true. In other words, once ci(j, k) is true, the only event of Pi that could reset it to false is either the receipt of a message that increases VCi[k] or, if k = i, the occurrence of a relevant event (which increases VCi[i]). Similarly, once ci(j, k) is false, the only event that can set it to true is the receipt of a message m′ from Pj piggybacking the triple (k, m′.VC[k], m′.IP[k]) with m′.VC[k] ≥ VCi[k].

In order to implement the local predicates ci(j, k), each process Pi is equipped with a boolean matrix Mi (as in [11]) such that Mi[j, k] = 1 ⇔ ci(j, k). It follows from the previous discussion that this matrix is managed according to the following rules (note that its i-th line is not significant (case j = i), and that its diagonal is always equal to 1):

M0 Initialization: ∀(j, k) : Mi[j, k] is initialized to 1.

M1 Each time it produces a relevant event e: Pi resets the i-th column of its matrix⁴: ∀j ≠ i : Mi[j, i] := 0.

M2 When Pi sends a message: no update of Mi occurs.

M3 When it receives a message m from Pj, Pi executes the following updates:
∀k ∈ [1..n] : case
VCi[k] < m.VC[k] then ∀ℓ ≠ i, j, k : Mi[ℓ, k] := 0; Mi[j, k] := 1
VCi[k] = m.VC[k] then Mi[j, k] := 1
VCi[k] > m.VC[k] then skip
endcase

The following lemma results from the rules M0-M3. The theorem that follows shows that the condition K2(m, k) is correct. (Both are proved in [1].)

Lemma 6. ∀i, ∀m sent by Pi to Pj, ∀k, we have: send(m).Mi[j, k] = 1 ⇒ send(m).VCi[k] ≤ pred(receive(m)).VCj[k].

Theorem 3. Let m be a message sent by Pi to Pj. Let K2(m, k) ≡ ((send(m).Mi[j, k] = 1) ∧ (send(m).IPi[k] = 1)) ∨ (send(m).VCi[k] = 0). We have: K2(m, k) ⇒ K(m, k).
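The rules M0-M3 translate almost literally into code. The sketch below is our own illustration: it assumes full vectors are available on receipt, whereas in the resulting protocol of the next section the corresponding rule (RM3) is only applied to the triples actually carried by a message.

```python
class BoolMatrix:
    """Management of the boolean matrix Mi of process Pi (rules M0-M3)."""

    def __init__(self, i, n):
        self.i, self.n = i, n
        # M0: every entry is initialized to 1 (line i is not significant,
        # and the diagonal remains equal to 1 forever).
        self.m = [[1] * n for _ in range(n)]

    def relevant_event(self):
        # M1: reset the i-th column, except the diagonal entry.
        for j in range(self.n):
            if j != self.i:
                self.m[j][self.i] = 0

    # M2: sending a message entails no update of Mi.

    def receive(self, j, vc_i, m_vc):
        # M3: per-entry comparison of the local clock with the received one.
        for k in range(self.n):
            if vc_i[k] < m_vc[k]:
                for l in range(self.n):
                    if l not in (self.i, j, k):
                        self.m[l][k] = 0
                self.m[j][k] = 1
            elif vc_i[k] == m_vc[k]:
                self.m[j][k] = 1
            # vc_i[k] > m_vc[k]: skip
```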
5.2 Resulting IPT Protocol

The complete text of the IPT protocol based on the previous discussion follows.

RM0 Initialization:
- Both VCi[1..n] and IPi[1..n] are set to [0, . . . , 0], and ∀(j, k) : Mi[j, k] is set to 1.

RM1 Each time it produces a relevant event e:
- Pi associates with e the timestamp e.TS defined as follows: e.TS = {(k, VCi[k]) | IPi[k] = 1},
- Pi increments its vector clock entry VCi[i] (namely, it executes VCi[i] := VCi[i] + 1),
- Pi resets IPi: ∀ℓ ≠ i : IPi[ℓ] := 0; IPi[i] := 1,
- Pi resets the i-th column of its boolean matrix: ∀j ≠ i : Mi[j, i] := 0.

RM2 When Pi sends a message m to Pj, it attaches to m the set of triples (each made up of a process id, an integer and a boolean): {(k, VCi[k], IPi[k]) | (Mi[j, k] = 0 ∨ IPi[k] = 0) ∧ (VCi[k] > 0)}.

RM3 When Pi receives a message m from Pj, it executes the following updates:
∀(k, m.VC[k], m.IP[k]) carried by m: case
VCi[k] < m.VC[k] then VCi[k] := m.VC[k]; IPi[k] := m.IP[k]; ∀ℓ ≠ i, j, k : Mi[ℓ, k] := 0; Mi[j, k] := 1
VCi[k] = m.VC[k] then IPi[k] := min(IPi[k], m.IP[k]); Mi[j, k] := 1
VCi[k] > m.VC[k] then skip
endcase

⁴ Actually, the value of this column remains constant after its first update. In fact, ∀j, Mi[j, i] can be set to 1 only upon the receipt of a message from Pj carrying the value VCj[i] (see R3). But, as Mj[i, i] = 1, Pj does not send VCj[i] to Pi. So, it is possible to improve the protocol by executing this reset of the column Mi[∗, i] only when Pi produces its first relevant event.
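Putting the rules together, a compact sketch of the whole protocol could look as follows (an illustration under our own naming; note that the triples are filtered at send time exactly by ¬K2):

```python
class IPTProcess:
    """Sketch of the matrix-based IPT protocol (rules RM0-RM3)."""

    def __init__(self, i, n):
        self.i, self.n = i, n
        self.vc = [0] * n                     # RM0: vector clock VCi
        self.ip = [0] * n                     # RM0: immediate-predecessor flags IPi
        self.m = [[1] * n for _ in range(n)]  # RM0: boolean matrix Mi

    def relevant_event(self):
        # RM1: timestamp the event, then update VCi, IPi and column i of Mi.
        ts = {(k, self.vc[k]) for k in range(self.n) if self.ip[k] == 1}
        self.vc[self.i] += 1
        self.ip = [0] * self.n
        self.ip[self.i] = 1
        for j in range(self.n):
            if j != self.i:
                self.m[j][self.i] = 0
        return ts

    def send(self, j):
        # RM2: attach a triple only when K2 does not hold.
        return [(k, self.vc[k], self.ip[k]) for k in range(self.n)
                if (self.m[j][k] == 0 or self.ip[k] == 0) and self.vc[k] > 0]

    def receive(self, j, triples):
        # RM3: only the triples actually carried by m are considered.
        for k, m_vc_k, m_ip_k in triples:
            if self.vc[k] < m_vc_k:
                self.vc[k], self.ip[k] = m_vc_k, m_ip_k
                for l in range(self.n):
                    if l not in (self.i, j, k):
                        self.m[l][k] = 0
                self.m[j][k] = 1
            elif self.vc[k] == m_vc_k:
                self.ip[k] = min(self.ip[k], m_ip_k)
                self.m[j][k] = 1
            # self.vc[k] > m_vc_k: skip
```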
5.3 A Tradeoff

The condition K2(m, k) shows that a triple need not be transmitted when (Mi[j, k] = 1 ∧ IPi[k] = 1) ∨ (VCi[k] = 0). Let us first observe that the management of IPi[k] is governed by the application program. More precisely, the IPT protocol does not define which events are relevant; it only has to guarantee a correct management of IPi[k]. In contrast, the matrix Mi does not belong to the problem specification: it is an auxiliary variable of the IPT protocol, which manages it so as to satisfy the following implication when Pi sends m to Pj: (Mi[j, k] = 1) ⇒ (pred(receive(m)).VCj[k] ≥ send(m).VCi[k]). The fact that the management of Mi is governed by the protocol and not by the application program leaves open the possibility to design a protocol where more entries of Mi are equal to 1. This can make the condition K2(m, k) more often satisfied⁵ and can consequently allow the protocol to transmit fewer triples.

We show here that it is possible to transmit fewer triples at the price of transmitting a few additional boolean vectors. The previous IPT matrix-based protocol (Section 5.2) is modified in the following way. The rules RM2 and RM3 are replaced with the modified rules RM2″ and RM3″ (Mi[∗, k] denotes the k-th column of Mi).

RM2″ When Pi sends a message m to Pj, it attaches to m the following set of 4-tuples (each made up of a process id, an integer, a boolean and a boolean vector): {(k, VCi[k], IPi[k], Mi[∗, k]) | (Mi[j, k] = 0 ∨ IPi[k] = 0) ∧ VCi[k] > 0}.

RM3″ When Pi receives a message m from Pj, it executes the following updates:
∀(k, m.VC[k], m.IP[k], m.M[1..n, k]) carried by m: case
VCi[k] < m.VC[k] then VCi[k] := m.VC[k]; IPi[k] := m.IP[k]; ∀ℓ ≠ i : Mi[ℓ, k] := m.M[ℓ, k]
VCi[k] = m.VC[k] then IPi[k] := min(IPi[k], m.IP[k]); ∀ℓ ≠ i : Mi[ℓ, k] := max(Mi[ℓ, k], m.M[ℓ, k])
VCi[k] > m.VC[k] then skip
endcase

Similarly to the proofs described in [1], it is possible to prove that the previous protocol still satisfies the property proved in Lemma 6, namely, ∀i, ∀m sent by Pi to Pj, ∀k we have (send(m).Mi[j, k] = 1) ⇒ (send(m).VCi[k] ≤ pred(receive(m)).VCj[k]).

⁵ Let us consider the previously described protocol (Section 5.2) where the value of each matrix entry Mi[j, k] is always equal to 0. The reader can easily verify that this setting correctly implements the matrix. Moreover, K2(m, k) is then always false: it actually coincides with K1(m, k) (which corresponds to the case where whole vectors have to be transmitted with each message).

Intuitively, the fact that some columns of the matrices M are attached to application messages allows a transitive transmission of information. More precisely, the relevant history of Pk known by Pj is transmitted to a process Pi via a causal sequence of messages from Pj to Pi. In contrast, the protocol described in Section 5.2 used only a direct transmission of this information. In fact, as explained in Section 5.1, the predicate c (locally implemented by the matrix M) was based on the existence of a message m′ sent by Pj to Pi, piggybacking the triple (k, m′.VC[k], m′.IP[k]), with m′.VC[k] ≥ VCi[k], i.e., on the existence of a direct transmission of information (by the message m′).

The resulting IPT protocol (defined by the rules RM0, RM1, RM2″ and RM3″) uses the same condition K2(m, k) as the previous one. It shows an interesting tradeoff between the number of triples (k, VCi[k], IPi[k]) whose transmission is saved and the number of boolean vectors that have to be additionally piggybacked. It is interesting to notice that the size of this additional information is bounded, while each triple includes an unbounded integer (namely, a vector clock value).
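As with the basic protocol, the modified rules translate directly into code. The sketch below (our own naming, building on the IPTProcess sketch given earlier) shows the two methods that would replace send and receive:

```python
def send_with_columns(self, j):
    # RM2'': piggyback the k-th column of Mi with each selected triple.
    return [(k, self.vc[k], self.ip[k], [self.m[l][k] for l in range(self.n)])
            for k in range(self.n)
            if (self.m[j][k] == 0 or self.ip[k] == 0) and self.vc[k] > 0]

def receive_with_columns(self, j, quadruples):
    # RM3'': copy the received column when the clock entry advances, and
    # take an entry-wise max on equality -- this realizes the transitive
    # transmission of information discussed above.
    for k, m_vc_k, m_ip_k, m_col_k in quadruples:
        if self.vc[k] < m_vc_k:
            self.vc[k], self.ip[k] = m_vc_k, m_ip_k
            for l in range(self.n):
                if l != self.i:
                    self.m[l][k] = m_col_k[l]
        elif self.vc[k] == m_vc_k:
            self.ip[k] = min(self.ip[k], m_ip_k)
            for l in range(self.n):
                if l != self.i:
                    self.m[l][k] = max(self.m[l][k], m_col_k[l])
        # self.vc[k] > m_vc_k: skip
```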
6. EXPERIMENTAL STUDY

This section compares the behaviors of the previous protocols by means of a simulation study. IPT1 denotes the protocol presented in Section 3.3 that uses the condition K1(m, k) (which is always equal to false). IPT2 denotes the protocol presented in Section 5.2 that uses the condition K2(m, k), where messages carry triples. Finally, IPT3 denotes the protocol presented in Section 5.3 that also uses the condition K2(m, k), but where messages carry additional boolean vectors.

This section does not aim to provide an in-depth simulation study of the protocols, but rather presents a general view of their behavior. To this end, it compares IPT2 and IPT3 with regard to IPT1. More precisely, for IPT2 the aim was to evaluate the gain in terms of triples (k, VCi[k], IPi[k]) not transmitted with respect to the systematic transmission of whole vectors as done in IPT1. For IPT3, the aim was to evaluate the tradeoff between the additional boolean vectors transmitted and the number of saved triples. The behavior of each protocol was analyzed on a set of programs.

6.1 Simulation Parameters

The simulator provides different parameters for tuning both the communication and the process features. These parameters allow one to set the number of processes of the simulated computation, to vary the rate of communication (send/receive) events, and to alter the time duration between two consecutive relevant events. Moreover, to be independent of a particular topology of the underlying network, a fully connected network is assumed. Internal events have not been considered.

Since the presence of the triples (k, VCi[k], IPi[k]) piggybacked by a message strongly depends on the frequency at which relevant events are produced by a process, different time distributions between two consecutive relevant events have been implemented (e.g., normal, uniform, and Poisson distributions). The senders of messages are chosen according to a random law. To exhibit particular configurations of a distributed computation, a given scenario can be provided to the simulator. Message transmission delays follow a standard normal distribution. Finally, the last parameter of the simulator is the number of send events occurring during a simulation.

6.2 Parameter Settings

To compare the behavior of the three IPT protocols, we performed a large number of simulations using different parameter settings. We set the number of processes participating in a distributed computation to 10. The number of communication events during the simulation has been set to 10 000. The parameter λ of the Poisson time distribution (λ is the average number of relevant events in a given time interval) has been set so that the relevant events are generated at the beginning of the simulation. With the uniform time distribution, a relevant event is generated (on average) every 10 communication events. The location parameter of the standard normal time distribution has been set so that the occurrence of relevant events is shifted around the third part of the simulation experiment.

As noted previously, the simulator can be fed with a given scenario. This allows the analysis of worst-case scenarios for IPT2 and IPT3. These scenarios correspond to the case where the relevant events are generated at the maximal frequency (i.e., each time a process sends or receives a message, it produces a relevant event).

Finally, the three IPT protocols are analyzed with the same simulation parameters.
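For reference, the settings described above can be grouped as follows (a hypothetical configuration record; names and defaults are ours, chosen to mirror the text):

```python
from dataclasses import dataclass

@dataclass
class SimulationSettings:
    """Parameter settings used to compare IPT1, IPT2 and IPT3."""
    n_processes: int = 10
    n_comm_events: int = 10_000
    # Time distribution between two consecutive relevant events:
    # "uniform" (one relevant event every 10 communication events on average),
    # "poisson" (lambda = 100, relevant events early in the run),
    # "normal" (relevant events shifted around the third part of the run),
    # or "worst" (a relevant event at every send and receive).
    relevant_event_distribution: str = "uniform"
    uniform_ratio: float = 1 / 10
    poisson_lambda: float = 100.0
```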
6.3 Simulation Results

The results are displayed in Figures 3.a-3.d. These figures plot the gain of the protocols in terms of the number of triples that are not transmitted (y axis) with respect to the number of communication events (x axis). From these figures, we observe that, whatever the time distribution followed by the relevant events, both IPT2 and IPT3 exhibit a better behavior than IPT1 (i.e., the total number of piggybacked triples is lower in IPT2 and IPT3 than in IPT1), even in the worst case (see Figure 3.d).

Let us consider the worst scenario. In that case, the gain is obtained at the very beginning of the simulation and lasts as long as there exists a process Pj for which ∀k : VCj[k] = 0. In that case, the condition ∀k : K(m, k) is satisfied. As soon as ∃k : VCj[k] ≠ 0, both IPT2 and IPT3 behave as IPT1 (the shape of the curve becomes flat), since the condition K(m, k) is no longer satisfied.

Figure 3.a shows that during the first events of the simulation, the slopes of the IPT2 and IPT3 curves are steep. The same occurs in Figure 3.d (which depicts the worst-case scenario). Then the slope of these curves decreases and remains constant until the end of the simulation. In fact, as soon as VCj[k] becomes greater than 0, the condition ¬K(m, k) reduces to (Mi[j, k] = 0 ∨ IPi[k] = 0).

Figure 3.b displays an interesting feature. It considers λ = 100. As the relevant events are taken only during the very beginning of the simulation, this figure exhibits a very steep slope, like the other figures. The figure shows that, as soon as no more relevant events are taken, on average 45% of the triples are not piggybacked by the messages. This shows the importance of the matrix Mi. Furthermore, IPT3 benefits from transmitting additional boolean vectors to save triple transmissions. Figures 3.a-3.c show that the average gain of IPT3 with respect to IPT2 is close to 10%.

Finally, Figure 3.c underlines even more the importance of the matrix Mi. When very few relevant events are taken, IPT2 and IPT3 turn out to be very efficient. Indeed, this figure shows that, very quickly, the gain in the number of triples that are saved is very high (actually, 92% of the triples are saved).

6.4 Lessons Learned from the Simulation

Of course, all simulation results are consistent with the theoretical results: IPT3 is always better than or equal to IPT2, and IPT2 is always better than IPT1. The simulation results teach us more:

• The first lesson we have learned concerns the matrix Mi. Its use is quite significant, but mainly depends on the time distribution followed by the relevant events. On the one hand, when observing Figure 3.b, where a large number of relevant events are taken in a very short time, IPT2 can save up to 45% of the triples. However, we could have expected a more substantial gain for IPT2, since the boolean vector IP tends to stabilize to [1, ..., 1] when no relevant events are taken. In fact, as discussed in Section 5.3, the management of the matrix Mi within IPT2 does not allow a transitive transmission of information, but only a direct transmission. This explains why some columns of Mi may remain equal to 0 while they could potentially be equal to 1. In contrast, as IPT3 benefits from transmitting additional boolean vectors (providing a transitive transmission of information), it reaches a gain of 50%.

On the other hand, when very few relevant events are taken over a large period of time (see Figure 3.c), the behavior of IPT2 and IPT3 turns out to be very efficient, since the transmission of up to 92% of the triples is saved. This comes from the fact that, very quickly, the boolean vector IPi tends to stabilize to [1, ..., 1] and the matrix Mi contains very few 0 entries, since very few relevant events have been taken. Thus, a direct transmission of the information is sufficient to quickly get matrices Mi with all entries equal to 1.
• The second lesson concerns IPT3, more precisely, the tradeoff between the additional piggybacking of boolean vectors and the number of triples whose transmission is saved. With n = 10, adding 10 booleans to a triple does not substantially increase its size. Figures 3.a-3.c exhibit the number of triples whose transmission is saved: the average gain (in number of triples) of IPT3 with respect to IPT2 is about 10%.

7. CONCLUSION

This paper has addressed an important causality-related distributed computing problem, namely, the Immediate Predecessors Tracking problem. It has presented a family of protocols that provide each relevant event with a timestamp that exactly identifies its immediate predecessors. The family is defined by a general condition that allows application messages to piggyback control information whose size can be smaller than n (the number of processes). In that sense, this family defines message size-efficient IPT protocols. According to the way the general condition is implemented, different IPT protocols can be obtained. Three of them have been described and analyzed with simulation experiments. Interestingly, it has also been shown that the efficiency of the protocols (measured in terms of the size of the control information that is not piggybacked by an application message) depends on the pattern defined by the communication events and the relevant events.

Last but not least, it is interesting to note that if one is not interested in tracking the immediate predecessor events, the protocols presented in the paper can be simplified by suppressing the boolean vectors IPi (but keeping the boolean matrices Mi). The resulting protocols, which implement a vector clock system, are particularly efficient as far as the size of the timestamp carried by each message is concerned. Interestingly, this efficiency is not obtained at the price of additional assumptions (such as FIFO channels).

8. REFERENCES
[1] Anceaume E., Hélary J.-M. and Raynal M., Tracking Immediate Predecessors in Distributed Computations. Res. Report #1344, IRISA, Univ. Rennes (France), 2001.
[2] Baldoni R., Prakash R., Raynal M. and Singhal M., Efficient ∆-Causal Broadcasting. Journal of Computer Systems Science and Engineering, 13(5):263-270, 1998.
[3] Chandy K.M. and Lamport L., Distributed Snapshots: Determining Global States of Distributed Systems. ACM Transactions on Computer Systems, 3(1):63-75, 1985.
[4] Diehl C., Jard C. and Rampon J.-X., Reachability Analysis of Distributed Executions. Proc. TAPSOFT'93, Springer-Verlag LNCS 668, pp. 629-643, 1993.
[5] Fidge C.J., Timestamps in Message-Passing Systems that Preserve Partial Ordering. Proc. 11th Australian Computing Conference, pp. 56-66, 1988.
[6] Fromentin E., Jard C., Jourdan G.-V. and Raynal M., On-the-fly Analysis of Distributed Computations. IPL, 54:267-274, 1995.
[7] Fromentin E. and Raynal M., Shared Global States in Distributed Computations. JCSS, 55(3):522-528, 1997.
[8] Fromentin E., Raynal M., Garg V.K. and Tomlinson A., On-the-Fly Testing of Regular Patterns in Distributed Computations. Proc. ICPP'94, Vol. 2:73-76, 1994.
[9] Garg V.K., Principles of Distributed Systems. Kluwer Academic Press, 274 pages, 1996.
[10] Hélary J.-M., Mostéfaoui A., Netzer R.H.B. and Raynal M., Communication-Based Prevention of Useless Checkpoints in Distributed Computations. Distributed Computing, 13(1):29-43, 2000.
[11] Hélary J.-M., Melideo G.
and Raynal M., Tracking Causality in Distributed Systems: a Suite of Efficient Protocols. Proc. SIROCCO'00, Carleton University Press, pp. 181-195, L'Aquila (Italy), June 2000.
[12] Hélary J.-M., Netzer R. and Raynal M., Consistency Issues in Distributed Checkpoints. IEEE TSE, 25(4):274-281, 1999.
[13] Hurfin M., Mizuno M., Raynal M. and Singhal M., Efficient Distributed Detection of Conjunctions of Local Predicates in Asynchronous Computations. IEEE TSE, 24(8):664-677, 1998.
[14] Lamport L., Time, Clocks and the Ordering of Events in a Distributed System. Comm. ACM, 21(7):558-565, 1978.
[15] Marzullo K. and Sabel L., Efficient Detection of a Class of Stable Properties. Distributed Computing, 8(2):81-91, 1994.
[16] Mattern F., Virtual Time and Global States of Distributed Systems. Proc. Int. Conf. Parallel and Distributed Algorithms, (Cosnard, Quinton, Raynal, Robert Eds), North-Holland, pp. 215-226, 1988.
[17] Prakash R., Raynal M. and Singhal M., An Adaptive Causal Ordering Algorithm Suited to Mobile Computing Environments. JPDC, 41:190-204, 1997.
[18] Raynal M. and Singhal S., Logical Time: Capturing Causality in Distributed Systems. IEEE Computer, 29(2):49-57, 1996.
[19] Singhal M. and Kshemkalyani A., An Efficient Implementation of Vector Clocks. IPL, 43:47-52, 1992.
[20] Wang Y.M., Consistent Global Checkpoints That Contain a Given Set of Local Checkpoints. IEEE TOC, 46(4):456-468, 1997.

Figure 3: Experimental Results. Each plot shows the gain in number of triples (y axis) against the number of communication events (x axis) for IPT1, IPT2, IPT3 and the relevant events. (a) The relevant events follow a uniform distribution (ratio = 1/10); (b) the relevant events follow a Poisson distribution (λ = 100); (c) the relevant events follow a normal distribution; (d) each process pi takes a relevant event and broadcasts to all processes.", "keywords": "immediate predecessor tracking;relevant event;causality track;transitive reduction;ipt protocol;timestamp;message-pass;hasse diagram;piggybacking;immediate predecessor;tracking causality;common global memory;message transfer delay;checkpointing problem;control information;vector timestamp;distributed computation;vector clock;channel ordering property"} {"name": "train_C-78", "title": "An Architectural Framework and a Middleware for Cooperating Smart Components", "abstract": "In a future networked physical world, a myriad of smart sensors and actuators assess and control aspects of their environments and autonomously act in response to it. Examples range from telematics and traffic management to team robotics and home automation, to name a few. To a large extent, such systems operate proactively and independently of direct human control, driven by the perception of the environment and the ability to organize the respective computations dynamically.
The challenging characteristics of these applications include sentience and autonomy of components, issues of responsiveness and safety criticality, geographical dispersion, mobility and evolution. A crucial design decision is the choice of the appropriate abstractions and interaction mechanisms. Looking at the basic building blocks of such systems, we may find components which comprise mechanical parts, hardware, software and a network interface; these components therefore have different characteristics compared to pure software components. They are able to spontaneously disseminate information in response to events observed in the physical environment or to events received from other components via the network interface. Larger autonomous components may be composed recursively from these building blocks. The paper describes an architectural framework and a middleware supporting a component-based system and an integrated view on event-based communication comprising the real world events and the events generated in the system. It starts with an outline of the component-based system construction. The generic event architecture GEAR is introduced, which describes the event-based interaction between the components via a generic event layer. The generic event layer hides the different communication channels, including the interactions through the environment. An appropriate middleware is presented which reflects these needs and allows the specification of events which have quality attributes to express temporal constraints. This is complemented by the notion of event channels, which are abstractions of the underlying network and allow the enforcement of quality attributes. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination.", "fulltext": "1. INTRODUCTION
In recent years we have seen the continuous improvement of technologies that are relevant for the construction of distributed embedded systems, including trustworthy visual, auditory, and location sensing [11], communication and processing. We believe that in a future networked physical world a new class of applications will emerge, composed of a myriad of smart sensors and actuators to assess and control aspects of their environments and autonomously act in response to it. The anticipated challenging characteristics of these applications include autonomy, responsiveness and safety criticality, large scale, geographical dispersion, mobility and evolution.

In order to deal with these challenges, it is of fundamental importance to use adequate high-level models, abstractions and interaction paradigms. Unfortunately, when facing the specific characteristics of the target systems, the shortcomings of current architectures and middleware interaction paradigms become apparent. Looking at the basic building blocks of such systems we may find components which comprise mechanical parts, hardware, software and a network interface. However, classical event/object models are usually software oriented and, as such, when transported to a real-time, embedded systems setting, their harmony is cluttered by the conflict between, on the one side, send/receive of software events (message-based), and on the other side, input/output of hardware or real-world events, register-based.
In terms of interaction paradigms, and although the use of event-based models appears to be a convenient solution [10, 22], these often lack the appropriate support for non-functional requirements like reliability, timeliness or security.

This paper describes an architectural framework and a middleware, supporting a component-based system and an integrated view on event-based communication comprising the real world events and the events generated in the system.

When choosing the appropriate interaction paradigm it is of fundamental importance to address the challenging issues of the envisaged sentient applications. Unlike classical approaches that confine the possible interactions to the application boundaries, i.e. to its components, we consider that the environment surrounding the application also plays a relevant role in this respect. Therefore, the paper starts by clarifying several issues concerning our view of the system, about the interactions that may take place and about the information flows. This view is complemented by providing an outline of the component-based system construction and, in particular, by showing that it is possible to compose larger applications from basic components, following a hierarchical composition approach.

This provides the necessary background to introduce the Generic-Events Architecture (GEAR), which describes the event-based interaction between the components via a generic event layer while allowing the seamless integration of physical and computer information flows. In fact, the generic event layer hides the different communication channels, including the interactions through the environment. Additionally, the event layer abstraction is also adequate for the proper handling of the non-functional requirements, namely reliability and timeliness, which are particularly stringent in real-time settings. The paper devotes particular attention to this issue by discussing the temporal aspects of interactions and the needs for predictability.

An appropriate middleware is presented which reflects these needs and allows the specification of events which have quality attributes to express temporal constraints. This is complemented by the notion of Event Channels (EC), which are abstractions of the underlying network while being abstracted by the event layer. In fact, event channels play a fundamental role in securing the functional and non-functional (e.g. reliability and timeliness) properties of the envisaged applications, that is, in allowing the enforcement of quality attributes. They are established prior to interaction to reserve the needed computational and network resources for highly predictable event dissemination.

The paper is organized as follows. In Section 3 we introduce the fundamental notions and abstractions that we adopt in this work to describe the interactions taking place in the system. Then, in Section 4, we describe the component-based approach that allows composition of objects. GEAR is then described in Section 5 and Section 6 focuses on temporal aspects of the interactions. Section 7 describes the COSMIC middleware, which may be used to specify the interaction between sentient objects. A simple example to highlight the ideas presented in the paper appears in Section 8 and Section 9 concludes the paper.

2.
RELATED WORK

Our work considers a wired physical world in which a very large number of autonomous components cooperate. It is inspired by many research efforts in very different areas. Event-based systems in general have been introduced to meet the requirements of applications in which entities spontaneously generate information and disseminate it [1, 25, 22]. Intended for large systems and requiring quite complex infrastructures, these event systems do not consider stringent quality aspects like timeliness and dependability issues. Secondly, they are not designed to support inter-operability between tiny smart devices with substantial resource constraints.

In [10] a real-time event system for CORBA has been introduced. The events are routed via a central event server which provides scheduling functions to support the real-time requirements. Such a central component is not available in an infrastructure envisaged in our system architecture, and the developed middleware TAO (The ACE ORB) is quite complex and unsuitable to be directly integrated in smart devices.

There are efforts to implement CORBA for control networks, tailored to connect sensor and actuator components [15, 19]. They are targeted for the CAN-Bus [9], a popular network developed for the automotive industry. However, in these approaches the support for timeliness or dependability issues does not exist or is only very limited.

A new scheme to integrate smart devices in a CORBA environment is proposed in [17] and has led to the proposal of a standard by the Object Management Group (OMG) [26]. Smart transducers are organized in clusters that are connected to a CORBA system by a gateway. The clusters form isolated subnetworks. A special master node enforces the temporal properties in the cluster subnet. A CORBA gateway allows one to access sensor data and write actuator data by means of an interface file system (IFS). The basic structure is similar to the WAN-of-CANs structure which has been introduced in the CORTEX project [4]. Islands of tight control may be realized by a control network and cooperate via wired or wireless networks covering a large number of these subnetworks. However, in contrast to the event channel model introduced in this paper, all communication inside a cluster relies on a single technical solution of a synchronous communication channel. Secondly, although the temporal behaviour of a single cluster is rigorously defined, no model to specify temporal properties for cluster-to-CORBA or cluster-to-cluster interactions is provided.

3. INFORMATION FLOW AND INTERACTION MODEL

In this paper we consider a component-based system model that incorporates previous work developed in the context of the IST CORTEX project [5]. As mentioned above, a fundamental idea underlying the approach is that applications can be composed of a large number of smart components that are able to sense their surrounding environment and interact with it. These components are referred to as sentient objects, a metaphor elaborated in CORTEX and inspired by the generic concept of sentient computing introduced in [12]. Sentient objects accept input events from a variety of different sources (including sensors, but not constrained to that), process them, and produce output events, whereby they actuate on the environment and/or interact with other objects.
Therefore, the following kinds of interactions can\ntake place in the system:\nEnvironment-to-object interactions: correspond to a\nflow of information from the environment to\napplication objects, reporting about the state of the former,\nand/or notifying about events taking place therein.\nObject-to-object interactions: correspond to a flow of\ninformation among sentient objects, serving two\npurposes. The first is related with complementing the\nassessment of each individual object about the state\nof the surrounding space. The second is related to\ncollaboration, in which the object tries to influence other\nobjects into contributing to a common goal, or into\nreacting to an unexpected situation.\nObject-to-environment interactions: correspond to a\nflow of information from an object to the environment,\nwith the purpose of forcing a change in the state of the\nlatter.\nBefore continuing, we need to clarify a few issues with\nrespect to these possible forms of interaction. We consider\nthat the environment can be a producer or consumer of\ninformation while interacting with sentient objects. The\nenvironment is the real (physical) world surrounding an\nobject, not necessarily close to the object or limited to certain\nboundaries. Quite clearly, the information produced by the\nenvironment corresponds to the physical representation of\nreal-time entities, of which typical examples include\ntemperature, distance or the state of a door. On the other hand,\nactuation on the environment implies the manipulation of\nthese real-time entities, like increasing the temperature\n(applying more heat), changing the distance (applying some\nmovement) or changing the state of the door (closing or\nopening it). The required transformations between system\nrepresentations of these real-time entities and their physical\nrepresentations is accomplished, generically, by sensors and\nactuators. We further consider that there may exist dumb\nsensors and actuators, which interact with the objects by\ndisseminating or capturing raw transducer information, and\nsmart sensors and actuators, with enhanced processing\ncapabilities, capable of speaking some more elaborate event\ndialect (see Sections 5 and 6.1). Interaction with the\nenvironment is therefore done through sensors and actuators,\nwhich may, or may not be part of sentient objects, as\ndiscussed in Section 4.2.\nState or state changes in the environment are considered\nas events, captured by sensors (in the environment or within\nsentient objects) and further disseminated to other\npotentially interested sentient objects in the system. In\nconsequence, it is quite natural to base the communication and\ninteraction among sentient objects and with the environment\non an event-based communication model. Moreover, typical\nproperties of event-based models, such as anonymous and\nnon-blocking communication, are highly desirable in systems\nwhere sentient objects can be mobile and where interactions\nare naturally very dynamic.\nA distinguishing aspect of our work from many of the\nexisting approaches, is that we consider that sentient objects\nmay indirectly communicate with each other through the\nenvironment, when they act on it. Thus the environment\nconstitutes an interaction and communication channel and\nis in the control and awareness loop of the objects. In other\nwords, when a sentient object actuates on the environment it\nwill be able to observe the state changes in the environment\nby means of events captured by the sensors. 
Clearly, other objects might as well capture the same events, thus establishing the above-mentioned indirect communication path.

In systems that involve interactions with the environment it is very important to consider the possibility of communication through the environment. It has been shown that the hidden channels developing through the latter (e.g., feedback loops) may hinder software-based algorithms that ignore them [30]. Therefore, any solution to the problem requires the definition of convenient abstractions and appropriate architectural constructs.

On the other hand, in order to deal with the information flow through the whole computer system and environment in a seamless way, handling software and hardware events uniformly, it is also necessary to find adequate abstractions. As discussed in Section 5, the Generic-Events Architecture introduces the concept of Generic Event and an Event Layer abstraction which aim at dealing, among other things, with these issues.

4. SENTIENT OBJECT COMPOSITION

In this section we analyze the most relevant issues related to the sentient object paradigm and the construction of systems composed of sentient objects.

4.1 Component-based System Construction

Sentient objects can take several different forms: they can simply be software-based components, but they can also comprise mechanical and/or hardware parts, amongst which the very sensorial apparatus that substantiates sentience, mixed with software components to accomplish their task. We refine this notion by considering a sentient object as an encapsulating entity, a component with internal logic and active processing elements, able to receive, transform and produce new events. This interface hides the internal hardware/software structure of the object, which may be complex, and shields the system from the low-level functional and temporal details of controlling a specific sensor or actuator.

Furthermore, given the inherent complexity of the envisaged applications, the number of simultaneous input events and the internal size of sentient objects may become too large and difficult to handle. Therefore, it should be possible to consider the hierarchical composition of sentient objects so that the application logic can be separated across as few or as many of these objects as necessary. On the other hand, composition of sentient objects should normally be constrained by the actual hardware component's structure, preventing the possibility of arbitrarily composing sentient objects.
This is illustrated in Figure 1, where a sentient object is internally composed of a few other sentient objects, each of them consuming and producing events, some of which are only internally propagated.

Figure 1: Component-aware sentient object composition.

Observing the figure, and recalling our previous discussion about the possible interactions, we identify all of them here: an object-to-environment interaction occurs between the object controlling a WLAN transmitter and some WLAN receiver in the environment; an environment-to-object interaction takes place when the object responsible for the GPS signal reception uses the information transmitted by the satellites; finally, explicit object-to-object interactions occur internally to the container object, through an internal communication network. Additionally, it is interesting to observe that implicit communication can also occur, whether the physical feedback develops through the environment internal to the container object (as depicted) or through the environment external to this object. However, there is a subtle difference between both cases. While in the former the feedback can only be perceived by objects internal to the container, bounding the extent to which consistency must be ensured, such bounds do not exist in the latter. In fact, the notion of sentient object as an encapsulating entity may serve other purposes (e.g., the confinement of feedback and of the propagation of events), beyond the mere hierarchical composition of objects.

To give a more concrete example of such component-aware object composition we consider a scenario of cooperating robots. Each robot is made of several components, corresponding, for instance, to axis and manipulator controllers. Together with the control software, each of these controllers may be a sentient object. On the other hand, a robot itself is a sentient object, composed of the objects materialized by the controllers, and the environment internal to its own structure, or body.

This means that it should be possible to define cooperation activities using the events produced by robot sentient objects, without the need to know the internal structure of robots, or the events produced by body objects or by smart sensors within the body. From an engineering point of view, however, this also means that the robot sentient object may have to generate new events that reflect its internal state, which requires the definition of a gateway to make the bridge between the internal and external environments.
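A minimal sketch of this consumer/producer view of composition, assuming nothing beyond the behavior described above (all names are illustrative, not part of the CORTEX model):

```python
class SentientObject:
    """A sentient object: an encapsulated consumer/producer of events
    that may itself be composed of other sentient objects (its body)."""

    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)  # internal objects, hidden from outside

    def consume(self, event):
        # Internal logic: transform input events into output events.
        ...

    def produce(self, event):
        # Publish to an event channel; only events produced at this level
        # are visible outside the object's body.
        ...

# A robot composed of controller objects: cooperation activities use the
# events produced by the robot object, not those of its internal objects.
robot = SentientObject("robot", children=[
    SentientObject("axis-controller"),
    SentientObject("manipulator-controller"),
])
```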
4.2 Encapsulation and Scoping

Now an important question is about how to represent and disseminate events in a large-scale networked world. As we have seen above, any event generated by a sentient object could, in principle, be visible anywhere in the system and thus received by any other sentient object. However, there are substantial obstacles to such universal interactions, originating from the heterogeneity of components in such a large-scale setting.

Firstly, the components may have severe performance constraints, particularly because we want to integrate smart sensors and actuators in such an architecture. Secondly, the bandwidth of the participating networks may vary largely. Such networks may be low-power, low-bandwidth fieldbuses, or more powerful wireless networks, as well as high-speed backbones. Thirdly, the networks may have widely different reliability and timeliness characteristics. Consider a platoon of cooperating vehicles. Inside a vehicle there may be a field-bus like CAN [8, 9], TTP/A [17] or LIN [20], with a comparatively low bandwidth. On the other hand, the vehicles are communicating with others in the platoon via a direct wireless link. Finally, there may be multiple platoons of vehicles which are coordinated by an additional wireless network layer.

At the abstraction level of sentient objects, such heterogeneity is reflected by the notion of body-vs-environment. At the network level, we assume the WAN-of-CANs structure [27] to model the different networks. The notion of body and environment is derived from the recursively defined component-based object model. A body is similar to a cell membrane and represents a quality of service container for the sentient objects inside. On the network level, it may be associated with the components coupled by a certain CAN. A CAN defines the dissemination quality which can be expected by the cooperating objects.

In the above example, a vehicle may be a sentient object, whose body is composed of the respective lower level objects (sensors and actuators) which are connected by the internal network (see Figure 1). Correspondingly, the platoon can itself be seen as an object composed of a collection of cooperating vehicles, its body being the environment encapsulated by the platoon zone. At the network level, the wireless network represents the respective CAN. However, several platoons united by their CANs may interact with each other and with objects further away, through some wider-range, possibly fixed networking substrate, hence the concept of WAN-of-CANs.

The notions of body-environment and WAN-of-CANs are very useful when defining interaction properties across such boundaries. Their introduction stems from our belief that a single mechanism to provide quality measures for interactions is not appropriate. Instead, a high-level construct for interaction across boundaries is needed which allows the specification of the quality of dissemination and exploits the knowledge about body and environment to assess the feasibility of quality constraints. As we will see in the following section, the notion of an event channel represents this construct in our architecture. It disseminates events and allows the network-independent specification of quality attributes. These attributes must be mapped to the respective properties of the underlying network structure.

5.
A GENERIC EVENTS ARCHITECTURE

In order to successfully apply event-based object-oriented models, addressing the challenges enumerated in the introduction of this paper, it is necessary to use adequate architectural constructs, which allow the enforcement of fundamental properties such as timeliness or reliability.

We propose the Generic-Events Architecture (GEAR), depicted in Figure 2, which we briefly describe in what follows (for a more detailed description please refer to [29]). The L-shaped structure is crucial to ensure some of the properties described.

Figure 2: Generic-Events architecture.

Environment: The physical surroundings, remote and close, solid and ethereal, of sentient objects.

Body: The physical embodiment of a sentient object (e.g., the hardware where a mechatronic controller resides, the physical structure of a car). Note that due to the compositional approach taken in our model, part of what is environment to a smaller object seen individually becomes body for a larger, containing object. In fact, the body is the internal environment of the object. This architecture layering allows composition to take place seamlessly, in what concerns information flow.

Inside a body there may also be implicit knowledge, which can be exploited to make interaction more efficient, like the knowledge about the number of cooperating entities, the existence of a specific communication network, or the simple fact that all components are co-located and thus the respective events do not need to specify location in their context attributes. Such intrinsic information is not available outside a body and, therefore, more explicit information has to be carried by an event.

Translation Layer: The layer responsible for physical event transformation from/to their native form to event channel dialect, between environment/body and an event channel. Essentially one doing observation and actuation operations on the lower side, and doing transactions of event descriptions on the other. On the lower side this layer may also interact with dumb sensors or actuators, therefore talking the language of the specific device. These interactions are done through operational networks (hence the antenna symbol in the figure).

Event Layer: The layer responsible for event propagation in the whole system, through several Event Channels (EC). In concrete terms, this layer is a kind of middleware that provides important event-processing services which are crucial for any realistic event-based system. For example, some of the services that imply the processing of events may include publishing, subscribing, discrimination (zoning, filtering, fusion, tracing), and queuing.

Communication Layer: The layer responsible for wrapping events (as a matter of fact, event descriptions in EC dialect) into carrier event-messages, to be transported to remote places.
For example, a sensing event generated by a smart sensor is wrapped in an event-message and disseminated, to be caught by whoever is concerned. The same holds for an actuation event produced by a sentient object, to be delivered to a remote smart actuator. Likewise, this may apply to an event-message from one sentient object to another. Dumb sensors and actuators do not send event-messages, since they are unable to understand the EC dialect (they have neither an event layer nor a communication layer; they communicate, if needed, through operational networks).

Regular Network: This is represented in the horizontal axis of the block diagram by the communication layer, which encompasses the usual LAN, TCP/IP, and real-time protocols, desirably augmented with reliable and/or ordered broadcast and other protocols.

The GEAR introduces some innovative ideas in distributed systems architecture. While serving an object model based on production and consumption of generic events, it treats events produced by several sources (environment, body, objects) in a homogeneous way. This is possible due to the use of a common basic dialect for talking about events and due to the existence of the translation layer, which performs the necessary translation between the physical representation of a real-time entity and the EC-compliant format. Crucial to the architecture is the event layer, which uses event channels to propagate events through regular network infrastructures. The event layer is realized by the COSMIC middleware, as described in Section 7.

5.1 Information Flow in GEAR

The flow of information (external environment and computational part) is seamlessly supported by the L-shaped architecture. It occurs in a number of different ways, which demonstrates the expressiveness of the model with regard to the necessary forms of information encountered in real-time cooperative and embedded systems.

Smart sensors produce events which report on the environment. Body sensors produce events which report on the body. They are disseminated by the local event layer module, on an event channel (EC) propagated through the regular network, to any relevant remote event layer modules where entities showed an interest in them, normally, sentient objects attached to the respective local event layer modules.

Sentient objects consume events they are interested in, process them, and produce other events. Some of these events are destined to other sentient objects. They are published on an EC using the same EC dialect that serves, e.g., sensor-originated events. However, these events are semantically of a kind such that they are to be subscribed by the relevant sentient objects, for example, the sentient objects composing a robot controller system, or, at a higher level, the sentient objects composing the actual robots in a cooperative application. Smart actuators, on the other hand, merely consume events produced by sentient objects, whereby they accept and execute actuation commands.
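The following sketch illustrates this anonymous, decoupled dissemination pattern at the event layer. It is a toy stand-in of ours; the actual event layer is realized by the COSMIC middleware of Section 7, whose interface may differ.

```python
from collections import defaultdict

class EventLayer:
    """Toy event layer: anonymous publish/subscribe over named channels."""

    def __init__(self):
        self.channels = defaultdict(list)  # channel name -> subscriber callbacks

    def subscribe(self, channel, callback):
        self.channels[channel].append(callback)

    def publish(self, channel, event):
        # Sensor-originated events, object-to-object events and actuation
        # commands all travel the same way; producers do not know consumers.
        for deliver in self.channels[channel]:
            deliver(event)

layer = EventLayer()
layer.subscribe("robot.position", lambda e: print("consumed:", e))
layer.publish("robot.position", {"x": 12.5, "y": 3.0})
```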
As an alternative to talking to other sentient objects, sentient objects can produce events of a lower level, for example, actuation commands on the body or environment. They publish these exactly the same way: on an event channel through the local event layer representative. Now, if these commands are of concern to local actuator units (e.g., body, including internal operational networks), they are passed on to the local translation layer. If they are of concern to a remote smart actuator, they are disseminated through the distributed event layer, to reach the former. In any case, if they are also of interest to other entities, such as other sentient objects that wish to be informed of the actuation command, then they are also disseminated through the EC to these sentient objects.

A key advantage of this architecture is that event-messages and physical events can be globally ordered, if necessary, since they all pass through the event layer. The model also offers opportunities to solve a long-lasting problem in real-time, computer control, and embedded systems: the inconsistency between the message passing and the feedback-loop information flow subsystems.

6. TEMPORAL ASPECTS OF THE INTERACTIONS

Any interaction needs some form of predictability. If safety-critical scenarios are considered, as is done in CORTEX, temporal aspects become crucial and have to be made explicit. The problem is how to define temporal constraints and how to enforce them by appropriate resource usage in a dynamic ad-hoc environment. In a system where interactions are spontaneous, it may also be necessary to determine temporal properties dynamically. To do this, the respective temporal information must be stated explicitly and be available during run-time. Secondly, it is not always ensured that temporal properties can be fulfilled. In these cases, adaptations and timing failure notification must be provided [2, 28]. In most real-time systems, the notion of a deadline is the prevailing scheme to express and enforce timeliness. However, a deadline only weakly reflects the temporal characteristics of the information which is handled. Secondly, a deadline often includes implicit knowledge about the system and the relations between activities. In a rather well-defined, closed environment, it is possible to make such implicit assumptions and map them to execution times and deadlines. E.g., the engineer knows how long a vehicle position can be used before the vehicle movement outdates this information. Thus he maps this dependency between speed and position on a deadline which then assures that the position error can be assumed to be bounded. In an open environment, this implicit mapping is not possible any more because, for an obvious reason, the relation between speed and position, and thus the error bound, cannot easily be reverse engineered from a deadline. Therefore, our event model includes explicit quality attributes which allow the specification of the temporal attributes for every individual event. This is of course an overhead compared to the use of implicit knowledge, but in a dynamic environment such information is needed.

To illustrate the problem, consider the example of the position of a vehicle. A position is a typical example of a ⟨time, value⟩ entity [30]. Thus, the position is useful if we can determine an error bound which is related to time: e.g., if we want a position error below 10 meters to establish a safety property between cooperating cars moving at 5 m/sec, the position has a validity time of 2 seconds. In a ⟨time, value⟩ entity we can trade time against the precision of the value. This is known as value over time and time over value [18]. Once the time-value relation has been established and captured in event attributes, subscribers of this event can locally decide about the usefulness of the information.
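The position example can be made concrete with a small sketch (our own illustration of a ⟨time, value⟩ entity; the field names are assumptions, not the COSMIC event format):

```python
from dataclasses import dataclass
import time

@dataclass
class TimedValue:
    """A <time, value> entity: the capture time stamp travels with the value."""
    value: object
    timestamp: float   # capture time of the real-time entity (seconds)
    validity: float    # application-defined temporal validity (seconds)

    def is_valid(self, now):
        # A consumer can locally detect a timing failure: the information
        # is useful only while its temporal validity has not expired.
        return now - self.timestamp <= self.validity

# A position error bound of 10 m between cars moving at 5 m/s yields a
# temporal validity of 10 / 5 = 2 seconds, as in the example above.
position = TimedValue(value=(47.1, 8.5), timestamp=time.time(), validity=10 / 5)
print(position.is_valid(time.time()))  # True while the position is still fresh
```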
In\nthe GEAR architecture temporal validity is used to reason\nabout safety properties in a event-based system [29]. We\nwill briefly review the respective notions and see how they\nare exploited in our COSMIC event middleware.\nConsider the timeline of generating an event representing\nsome real-time entity [18] from its occurrence to the\nnotification of a certain sentient object (Figure 3). The real-time\nentity is captured at the sensor interface of the system and\nhas to be transformed in a form which can be treated by a\ncomputer. During the time interval t0 the sensor reads the\nreal-time entity and a time stamp is associated with the\nrespective value. The derived time, value entity represents\nan observation. It may be necessary to perform substantial\nlocal computations to derive application relevant\ninformation from the raw sensor data. However, it should be noted\nthat the time stamp of the observation is associated with\nthe capture time and thus independent from further signal\nprocessing and event generation. This close relationship\nbetween capture time and the associated value is supported by\nsmart sensors described above.\nThe processed sensor information is assembled in an event\ndata structure after ts to be published to an event channel.\nAs is described later, the event includes the time stamp of\ngeneration and the temporal validity as attributes.\nThe temporal validity is an application defined measure\nfor the expiration of a time, value . As we explained in\nthe example of a position above, it may vary dependent on\napplication parameters. Temporal validity is a more general\nconcept than that of a deadline. It is independent of a\ncertain technical implementation of a system. While deadlines\nmay be used to schedule the respective steps in an event\ngeneration and dissemination, a temporal validity is an\nintrinsic property of a time, value entity carried in an event.\nA temporal validity allows to reason about the usefulness\nof information and is beneficial even in systems in which\ntimely dissemination of events cannot be enforced because\nit enables timing failure detection at the event consumer. It\nis obvious that deadlines or periods can be derived from the\ntemporal validity of an event. To set a deadline, knowledge\nof an implementation, worst case execution times or\nmessage dissemination latencies is necessary. Thus, in the\ntimeline of Figure 3 every interval may have a deadline. Event\ndissemination through soft real-time channels in COSMIC\nexploits the temporal validity to define dissemination\ndeadlines. Quality attributes can be defined, for instance, in\nterms of validity interval, omission degree pairs. These\nallow to characterize the usefulness of the event for a certain\napplication, in a certain context. Because of that, quality\nattributes of an event clearly depend on higher level issues,\nsuch as the nature of the sentient object or of the smart\nsensor that produced the event. For instance, an event\ncontaining an indication of some vehicle speed must have\ndifferent quality attributes depending on the kind of vehicle\n33\nreal-world\nevent\nobservation:\n