text: string (lengths 70 to 7.94k)
__index_level_0__: int64 (105 to 711k)
Title: Fairness of Machine Learning in Search Engines Abstract: Fairness has gained increasing importance in a variety of AI and machine learning contexts. As one of the most ubiquitous applications of machine learning, search engines mediate much of the information experience of members of society. Consequently, understanding and mitigating potential algorithmic unfairness in search have become crucial for both users and systems. In this tutorial, we will introduce the fundamentals of fairness in machine learning, for both supervised learning such as classification and ranking, and unsupervised learning such as clustering. We will then present the existing work on fairness in search engines, including the fairness definitions, evaluation metrics, and taxonomies of methodologies. This tutorial will help orient information retrieval researchers to algorithmic fairness, provide an introduction to the growing literature on this topic, and gather researchers and practitioners interested in this research direction.
710,723
Title: PAS: Privacy Algorithms in Systems Abstract: Today we face an explosion of data generation, ranging from health monitoring to national security infrastructure systems. More and more systems are connected to the Internet and collect data at regular time intervals. These systems share data and use machine learning methods to make intelligent decisions, which has resulted in numerous real-world applications (e.g., autonomous vehicles, recommendation systems, and heart-rate monitoring). However, these approaches are prone to identity theft and other privacy-related cyber-security attacks. So, how can data privacy be protected efficiently in these scenarios? More dedicated efforts are needed to propose the integration of privacy techniques into existing systems and to develop more advanced privacy techniques that address the complex challenges of multi-system connectivity and data fusion. Therefore, we have introduced Privacy Algorithms in Systems (PAS) at CIKM, which provides a venue to gather academic researchers and industry researchers/practitioners to present their research in an effort to advance the frontier of this critical direction of privacy algorithms in systems.
710,724
Title: The 1st International Workshop on Federated Learning with Graph Data (FedGraph) Abstract: The field of graph data mining, one of the most important AI research areas, has been revolutionized by graph neural networks (GNNs), which benefit from training on real-world graph data with millions to billions of nodes and links. Unfortunately, the training data and process of GNNs involving graphs beyond millions of nodes are extremely costly on a centralized server, if not impossible. Moreover, due to the increasing concerns about data privacy, emerging data from realistic applications are naturally fragmented, forming distributed private graphs of multiple "data silos", among which direct transferring of data is forbidden. The nascent field of federated learning (FL), which aims to enable individual clients to jointly train their models while keeping their local data decentralized and completely private, is a promising paradigm for large-scale distributed and private training of GNNs. FedGraph aims to bring together researchers from different backgrounds with a common interest in how to extend current FL algorithms to operate with graph data models such as GNNs. FL is an extremely hot topic of large commercial interest and has been intensively explored for machine learning with visual and textual data. Exploration by graph mining researchers and industrial practitioners is only just catching up. There are many unexplored challenges and opportunities, which urges the establishment of an organized and open community to collaboratively advance the science behind it. The prospective participants of this workshop will include researchers and practitioners from both the graph mining and federated learning communities, whose interests include, but are not limited to: graph analysis and mining, heterogeneous network modeling, complex data mining, large-scale machine learning, distributed systems, optimization, meta-learning, reinforcement learning, privacy, robustness, explainability, fairness, ethics, and trustworthiness.
710,725
Title: The Third Workshop on Data-driven Intelligent Transportation Abstract: Traffic is the pulse of the city. Transportation systems can involve humans, vehicles, shipments, information technology, and the physical infrastructure, all interacting in complex ways. Intelligent transportation enables the city to function in a more efficient and effective way. A wide range of city data has become increasingly available, such as taxi trips, surveillance camera data, human mobility data from mobile phones or location-based services, events from social media, car accident reports, bike-sharing information, Points-Of-Interest, traffic sensors, public transportation data, and many more. This abundance of data poses a grand challenge to the CIKM research community: how to utilize such data toward city intelligence, across various transportation tasks? The 3rd workshop on "Data-driven Intelligent Transportation" welcomes articles and presentations in the areas of transportation systems, data mining, and artificial intelligence, conveying new advances and developments in theory, modeling, simulation, testing, case studies, as well as large-scale deployment.
710,726
Title: Applied Machine Learning Methods for Time Series Forecasting Abstract: Time series data is ubiquitous, and accurate time series forecasting is vital for many real-world application domains, including retail, healthcare, supply chain, climate science, e-commerce and economics. Forecasting, in general, has led to broad impact and a diverse range of applications. However, with large-scale, high-dimensional time-series data available, more advanced techniques must be invented or improved for highly accurate predictions. The latest data mining and machine learning techniques play a crucial role in the next generation of forecasting models. In this Applied Machine Learning Methods for Time Series Forecasting (AMLTS) workshop, we focus on the latest effective and accurate machine learning approaches for solving various real-world problems. With this workshop's ability to attract audiences across various domains, we invite experienced industrial practitioners and researchers to help uncover new approaches and break new ground in the challenging and vital settings of time-series modeling.
710,727
Title: The 2nd Workshop on Mixed-Initiative ConveRsatiOnal Systems (MICROS) Abstract: ABSTRACTThe Mixed-Initiative ConveRsatiOnal Systems workshop (MICROS) aims at bringing novel ideas and investigating new solutions on conversational assistant systems. The increasing popularity of personal assistant systems, as well as smartphones, has changed the way users access online information, posing new challenges for information seeking and filtering. MICROS has a particular focus on mixed-initiative conversational systems, namely, systems that can provide answers in a proactive way (e.g., asking for clarification or proposing possible interpretations for ambiguous and vague requests). We invite people working on conversational systems or interested in the workshop topics to send us their position and research manuscripts.
710,728
Title: Deep Learning for Search and Recommendation Abstract: In the current digital world, web search engines and recommendation systems are continuously evolving, opening up new potential challenges every day that require more sophisticated and efficient data mining and machine learning solutions to satisfy the needs of sellers and consumers as well as marketers. The quality of search and recommendation systems impacts customer retention, time on site, and sales volume. For instance, with often-sparse conversion rates, highly personalized content, and heterogeneous digital sources, research engineers and data scientists are required to develop more rigorous and effective models. At the same time, deep learning has started to show great impact in many industrial applications, being capable of processing complicated, large-scale and real-time data. Deep learning not only provides more opportunities to increase conversion rates and improve revenue through a positive customer experience, but also provides customers with personalized content along their personal shopping journey. Due to this rapid growth of the digital world, there is a need to bring professionals together from both academic research and industry to solve real-world problems. This workshop fosters the development of a strong research community focused on solving deep learning-based large-scale web search, personalized search, recommendation and ranking relevance problems that provide a superior digital experience to all users.
710,729
Title: TrustLOG: The First Workshop on Trustworthy Learning on Graphs Abstract: Learning on graphs (LOG) plays a pivotal role in various high-impact application domains. The past decades have seen the development of tremendous theories, algorithms, and open-source systems for answering what/who questions on graphs. However, recent studies reveal that the state-of-the-art techniques for learning on graphs (LOG) are often not trustworthy in practice with respect to several social aspects (e.g., fairness, transparency, security). A natural research question to ask is: how can we make learning algorithms on graphs trustworthy? To answer this question, we propose a paradigm shift, from answering what and who LOG questions to understanding how and why LOG questions. The TrustLOG workshop provides a venue for presenting, discussing, and promoting frontier research on trustworthy learning on graphs. Moreover, TrustLOG will serve as an impulse for the LOG community to identify novel research problems and shed new light on future directions.
710,730
Title: Workshop on Proactive and Agent-Supported Information Retrieval (PASIR) Abstract: ABSTRACTEstablished information retrieval (IR) systems are generally reactive in that they respond to active entry of a search query by a user. Information is thus only provided to a user when they identify a need for information, and invest the effort to address this need using a search engine. As such, users end up spending more effort to satisfy their information need. Different from reactive systems, proactive information retrieval (PIR) systems seek to retrieve relevant content without the user explicitly submitting a query. A PIR system usually achieves this by using a combination of observed user activities, their contexts and user profile information firstly to automatically create search queries, and then perform search operations and present retrieved results. This new paradigm of PIR methodology thus presents opportunities for investigating a number of research questions for improving user satisfaction, such as when would be the right opportunities to deliver information to users, what should be the presentation mode of search results, establishing a conversational communication channel with the users to ask clarifying questions to refine the retrieved information, and even providing users with explanations for the retrieved information. The aim of the workshop thus is to solicit submissions from the research community targeted towards these topics, and consolidate the research directions in this domain via exchange of ideas.
710,731
Title: THECOG 2022 - Transforms In Behavioral And Affective Computing (Revisited) Abstract: Human decision making is central in many functions across a broad spectrum of fields such as marketing, investment, smart contract formulations, political campaigns, and organizational strategic management. Behavioral economics seeks to study the psychological, cultural, and social factors contributing to decision making and reasoning. It should be highlighted here that behavioral economics does not negate classical economic theory but rather extends it in two distinct directions. First, a finer granularity can be obtained by studying the decision making process not of massive populations but instead of individuals and groups, with signal estimation or deep learning techniques based on a wide array of attributes ranging from social media posts to physiological signs. Second, time becomes a critical parameter, and changes to the disposition towards alternative decisions can be tracked with input-output or state space models. The primary findings so far are concepts like bounded rationality and perceived risk, while results include optimal strategies for various levels of information awareness and action strategies based on perceived loss aversion principles. From the above it follows that behavioral economics relies on deep learning, signal processing, control theory, social media analysis, affective computing, natural language processing, and gamification, to name only a few fields. Therefore, it is directly tied to computer science in many ways. THECOG will be a central meeting point for researchers of various backgrounds in order to generate new interdisciplinary and groundbreaking results.
710,732
Title: Workshop on Human-in-the-loop Data Curation Abstract: Although data quality is a long-standing and enduring problem, it has recently received a resurgence of attention due to the fast proliferation of data analytics, machine learning, and decision-support applications built upon the wide-scale availability and accessibility of (big) data. The success of such applications heavily relies on not only the quantity, but also the quality of data. Data curation, which may include annotation, cleaning, transformation, integration, etc., is a critical step to provide adequate assurances on the quality of analytics and machine learning results. Such data preparation activities are recognised as time and resource intensive for data scientists, as data often comes with a number of challenges that need to be tackled before it can be used in practice. Data re-purposing, and the resulting distance between the design and use intentions of the data, is a fundamental issue behind many of these challenges. These challenges include a variety of data issues such as noise and outliers, incompleteness, representativeness or biases, heterogeneity of format or semantics, etc. Mishandling these challenges can lead to negative and sometimes damaging effects, especially in critical domains like healthcare, transport, and finance. An observable and distinct feature of data quality in these contexts is the increasingly important role played by humans, who are often the source of data generation and active players in data curation. This workshop will provide an opportunity to explore the interdisciplinary overlap between manual, automated, and hybrid human-machine methods of data curation.
710,733
Title: AIMLAI: Advances in Interpretable Machine Learning and Artificial Intelligence Abstract: Recent technological advances rely on accurate decision support systems that can be perceived as black boxes due to their overwhelming complexity. This lack of transparency can lead to technical, ethical, legal, and trust issues. For example, if the control module of a self-driving car fails to detect a pedestrian, it becomes crucial to know why the system erred. In some other cases, the decision system may reflect unacceptable biases that can generate distrust. The General Data Protection Regulation (GDPR), approved by the European Parliament in 2018, suggests that individuals should be able to obtain explanations of the decisions made from their data by automated processing, and to challenge those decisions. All these reasons have given rise to the domain of interpretable and explainable AI. AIMLAI aims at gathering researchers, experts and professionals, from inside and outside the domain of AI, interested in the topic of interpretable ML and interpretable AI. The workshop encourages interdisciplinary collaborations, with particular emphasis on knowledge management, InfoVis, human-computer interaction and psychology. It also welcomes applied research for use cases where interpretability matters. AIMLAI aims to become a discussion venue for the advent of novel interpretable algorithms and explainability modules that mediate the communication between complex ML/AI systems and users.
710,734
Title: Optimizing the Timing of Intelligent Suggestion in Virtual Reality Abstract: ABSTRACT Intelligent suggestion techniques can enable low-friction selection-based input within virtual or augmented reality (VR/AR) systems. Such techniques leverage probability estimates from a target prediction model to provide users with an easy-to-use method to select the most probable target in an environment. For example, a system could highlight the predicted target and enable a user to select it with a simple click. However, as the probability estimates can be made at any time, it is unclear when an intelligent suggestion should be presented. Earlier suggestions could save a user time and effort but be less accurate. Later suggestions, on the other hand, could be more accurate but save less time and effort. This paper thus proposes a computational framework that can be used to determine the optimal timing of intelligent suggestions based on user-centric costs and benefits. A series of studies demonstrated the value of the framework for minimizing task completion time and maximizing suggestion usage and showed that it was both theoretically and empirically effective at determining the optimal timing for intelligent suggestions.
710,735
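The cost-benefit framing in the intelligent-suggestion-timing abstract above lends itself to a simple expected-utility view. The sketch below is not the paper's framework; it is a minimal illustration under stated assumptions: a hypothetical target-prediction model exposes its confidence at a few time points, a correct suggestion saves whatever selection effort remains, and an incorrect one costs a fixed correction time. The system would then present the suggestion at the moment of highest expected net benefit.

```python
# Minimal sketch (not the paper's framework): decide when to show an intelligent
# suggestion by trading off earlier-but-less-accurate against later-but-more-accurate
# predictions. All numbers below are hypothetical.

def expected_benefit(p_correct: float, time_saved: float, correction_cost: float) -> float:
    """Expected net time saved (seconds) if the suggestion is shown now."""
    return p_correct * time_saved - (1.0 - p_correct) * correction_cost

def best_suggestion_time(confidence_over_time, remaining_effort, correction_cost):
    """Pick the timestamp with the highest expected benefit (None if never positive).

    confidence_over_time: list of (t_seconds, p_correct) from a target-prediction model.
    remaining_effort: callable t -> seconds of selection effort still avoidable at time t
                      (showing later saves less, because the user has already worked).
    """
    best_t, best_value = None, 0.0
    for t, p in confidence_over_time:
        value = expected_benefit(p, remaining_effort(t), correction_cost)
        if value > best_value:
            best_t, best_value = t, value
    return best_t, best_value

if __name__ == "__main__":
    # Hypothetical numbers: confidence grows over time, remaining savable effort shrinks.
    confidence = [(0.5, 0.55), (1.0, 0.70), (1.5, 0.85), (2.0, 0.95)]
    effort_left = lambda t: max(0.0, 3.0 - t)   # seconds of selection effort left at time t
    print(best_suggestion_time(confidence, effort_left, correction_cost=1.5))
```

With these illustrative numbers the optimum falls at an intermediate time (1.5 s), showing why neither the earliest nor the latest suggestion is necessarily best.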
Title: Integrating Real-World Distractions into Virtual Reality Abstract: ABSTRACTWith the proliferation of consumer-level virtual reality (VR) devices, users started experiencing VR in less controlled environments, such as in social gatherings and public areas. While the current VR hardware provides an increasingly immersive experience, it ignores stimuli originating from the physical surroundings that distract users from the VR experience. To block distractions from the outside world, many users wear noise-canceling headphones. However, this is insufficient to block loud or transient sounds (e.g., drilling or hammering) and, especially, multi-modal distractions (e.g., air drafts, temperature shifts from an A/C, construction vibrations, or food smells). To tackle this, we explore a new concept, where we directly integrate the distracting stimuli from the user's physical surroundings into their virtual reality experience to enhance presence. Using our approach, an otherwise distracting wind gust can be directly mapped to the sway of trees in a VR experience that already contains trees. Using our novel approach, we demonstrate how to integrate a range of distractive stimuli into the VR experience, such as haptics (temperature, vibrations, touch), sounds, and smells. To validate our approach, we conducted three user studies and a technical evaluation. First, to validate our key principle, we conducted a controlled study where participants were exposed to distractions while playing a VR game. We found that our approach improved users’ sense of presence, compared to wearing noise-canceling headphones. From these results, we engineered a sensing module that detects a set of simple distractive signals (e.g., sounds, winds, and temperature shifts). We validated our hardware in a technical evaluation and in an out-of-lab study where participants played VR games in an uncontrolled environment. Moreover, to gather the perspective of VR content creators that might one day utilize a system inspired by our findings, we invited game designers to use our approach and collected their feedback and VR designs. Finally, we present design considerations for mapping distracting external stimuli and discuss ethical considerations of integrating real-world stimuli into virtual reality.
710,736
Title: RIDS: Implicit Detection of a Selection Gesture Using Hand Motion Dynamics During Freehand Pointing in Virtual Reality Abstract: Freehand interactions with augmented and virtual reality are growing in popularity, but they lack reliability and robustness. Implicit behavior from users, such as hand or gaze movements, might provide additional signals to improve the reliability of input. In this paper, the primary goal is to improve the detection of a selection gesture in VR during point-and-click interaction. Thus, we propose and investigate the use of information contained within the hand motion dynamics that precede a selection gesture. We built two models that classified whether a user is likely to perform a selection gesture at the current moment in time. We collected data during a pointing-and-selection task from 15 participants and trained two models with different architectures: a logistic regression classifier was trained using predefined hand motion features, and a temporal convolutional network (TCN) classifier was trained using raw hand motion data. Leave-one-subject-out cross-validation PR-AUCs of 0.36 and 0.90 were obtained for each model respectively, demonstrating that the models performed well above the chance level of 0.13. The TCN model was found to improve the precision of a noisy selection gesture by 11.2% without sacrificing recall performance. An initial analysis of the generalizability of the models demonstrated above-chance performance, suggesting that this approach could be scaled to other interaction tasks in the future.
710,737
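For readers unfamiliar with the evaluation setup named in the RIDS abstract, the sketch below shows the structure of the simpler of the two models: a logistic-regression classifier over per-window hand-motion features, scored with leave-one-subject-out PR-AUC. The feature names and the synthetic data are illustrative stand-ins, not the study's dataset or feature set.

```python
# Sketch of a logistic-regression selection-gesture detector evaluated with
# leave-one-subject-out PR-AUC. Synthetic stand-in data; not the study's features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
n_subjects, n_windows = 15, 200
# Hypothetical per-window features: [speed, deceleration, path curvature, jerk]
X = rng.normal(size=(n_subjects, n_windows, 4))
y = (rng.random(size=(n_subjects, n_windows)) < 0.13).astype(int)  # ~13% positive windows
X[y == 1] += np.array([0.0, 1.5, 0.5, 1.0])  # make positives separable for the demo

pr_aucs = []
for held_out in range(n_subjects):            # leave-one-subject-out cross-validation
    train = [s for s in range(n_subjects) if s != held_out]
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[train].reshape(-1, 4), y[train].ravel())
    scores = clf.predict_proba(X[held_out])[:, 1]
    pr_aucs.append(average_precision_score(y[held_out], scores))

print(f"mean PR-AUC: {np.mean(pr_aucs):.2f} (chance level ~ {y.mean():.2f})")
```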
Title: DEEP: 3D Gaze Pointing in Virtual Reality Leveraging Eyelid Movement Abstract: Gaze-based target selection suffers from low input precision and target occlusion. In this paper, we explored leveraging continuous eyelid movement to support highly efficient and occlusion-robust dwell-based gaze pointing in virtual reality. We first conducted two user studies to examine users' eyelid movement patterns in both unintentional and intentional conditions. The results proved the feasibility of leveraging intentional eyelid movement, which is distinguishable from natural movements, for input. We also tested the participants' dwelling patterns for targets with different sizes and locations. Based on these results, we propose DEEP, a novel technique that enables users to see through occlusions by controlling the aperture angle of their eyelids and to dwell to select targets with the help of a probabilistic input prediction model. Evaluation results showed that DEEP with dynamic depth and location selection significantly outperformed its static variants, as well as a naive dwelling baseline technique. Even for 100% occluded targets, it could achieve an average selection speed of 2.5s with an error rate of 2.3%.
710,738
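The DEEP abstract mentions "a probabilistic input prediction model" without detailing it. The sketch below is purely illustrative of one simple form such a dwell-target predictor could take (a Bayesian update of candidate-target probabilities from noisy 3D gaze samples); it is my assumption, not the authors' model.

```python
# Illustrative only: one simple form a probabilistic dwell-target predictor could take.
# A Gaussian likelihood scores how well each candidate target explains the stream of
# noisy 3D gaze samples; target probabilities are updated per sample.
import numpy as np

def update_target_probs(priors, targets, gaze_sample, sigma=0.05):
    """Bayesian update of target probabilities given one noisy 3D gaze point (metres)."""
    d2 = np.sum((targets - gaze_sample) ** 2, axis=1)
    likelihood = np.exp(-d2 / (2 * sigma**2))
    posterior = priors * likelihood
    return posterior / posterior.sum()

targets = np.array([[0.00, 0.0, 1.0],    # near target
                    [0.02, 0.0, 1.5],    # partially occluded target behind it
                    [0.30, 0.1, 1.2]])
probs = np.full(len(targets), 1 / 3)     # uniform prior over candidates

# Simulated dwell: gaze hovers close to the second (deeper) target.
for gaze in np.random.default_rng(1).normal([0.02, 0.0, 1.45], 0.03, size=(30, 3)):
    probs = update_target_probs(probs, targets, gaze)

print(probs.round(3))   # probability mass should concentrate on the intended target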
Title: PassengXR: A Low Cost Platform for Any-Car, Multi-User, Motion-Based Passenger XR Experiences Abstract: ABSTRACTWe present PassengXR, an open-source toolkit for creating passenger eXtended Reality (XR) experiences in Unity. XR allows travellers to move beyond the physical limitations of in-vehicle displays, rendering immersive virtual content based on - or ignoring - vehicle motion. There are considerable technical challenges to using headsets in moving environments: maintaining the forward bearing of IMU-based headsets; conflicts between optical and inertial tracking of inside-out headsets; obtaining vehicle telemetry; and the high cost of design given the necessity of testing in-car. As a consequence, existing vehicular XR research typically relies on controlled, simple routes to compensate. PassengXR is a cost-effective open-source in-car passenger XR solution. We provide a reference set of COTS hardware that enables the broadcasting of vehicle telemetry to multiple headsets. Our software toolkit then provides support to correct vehicle-headset alignment, and then create a variety of passenger XR experiences, including: vehicle-locked content; motion- and location-based content; and co-located multi-passenger applications. PassengXR also supports the recording and playback of vehicle telemetry, assisting offline design without resorting to costly in-car testing. Through an evaluation-by-demonstration, we show how our platform can assist practitioners in producing novel, multi-user passenger XR experiences.
710,739
Title: Sketched Reality: Sketching Bi-Directional Interactions Between Virtual and Physical Worlds with AR and Actuated Tangible UI Abstract: ABSTRACT This paper introduces Sketched Reality, an approach that combines AR sketching and actuated tangible user interfaces (TUI) for bi-directional sketching interaction. Bi-directional sketching enables virtual sketches and physical objects to “affect” each other through physical actuation and digital computation. In the existing AR sketching, the relationship between virtual and physical worlds is only one-directional — while physical interaction can affect virtual sketches, virtual sketches have no return effect on the physical objects or environment. In contrast, bi-directional sketching interaction allows the seamless coupling between sketches and actuated TUIs. In this paper, we employ tabletop-size small robots (Sony Toio) and an iPad-based AR sketching tool to demonstrate the concept. In our system, virtual sketches drawn and simulated on an iPad (e.g., lines, walls, pendulums, and springs) can move, actuate, collide, and constrain physical Toio robots, as if virtual sketches and the physical objects exist in the same space through seamless coupling between AR and robot motion. This paper contributes a set of novel interactions and a design space of bi-directional AR sketching. We demonstrate a series of potential applications, such as tangible physics education, explorable mechanism, tangible gaming for children, and in-situ robot programming via sketching.
710,740
Title: iWood: Makeable Vibration Sensor for Interactive Plywood Abstract: iWood is interactive plywood that can sense vibration based on the triboelectric effect. As a material, iWood survives common woodworking operations, such as sawing, screwing, and nailing, and can be used to create furniture and artifacts. Things created using iWood inherit its sensing capability and can detect a variety of user input and activities based on their unique vibration patterns. Through a series of experiments and machine simulations, we carefully chose the size of the sensor electrodes, the type of triboelectric materials, and the bonding method of the sensor layers to optimize sensitivity and fabrication complexity. The sensing performance of iWood was evaluated with 4 gestures and 12 daily activities carried out on a table, a nightstand, and a cutting board, all created using iWood. Our results suggest over 90% accuracy for activity and gesture recognition.
710,741
Title: AirLogic: Embedding Pneumatic Computation and I/O in 3D Models to Fabricate Electronics-Free Interactive Objects Abstract: Researchers have developed various tools and techniques towards the vision of on-demand fabrication of custom, interactive devices. Recent work has 3D-printed artefacts like speakers, electromagnetic actuators, and hydraulic robots. However, these are non-trivial to instantiate as they require post-fabrication mechanical or electronic assembly. We introduce AirLogic: a technique to create electronics-free, interactive objects by embedding pneumatic input, logic processing, and output widgets in 3D-printable models. AirLogic devices can perform basic computation on user inputs and create visible, audible, or haptic feedback; yet they do not require electronic circuits, physical assembly, or resetting between uses. Our library of 13 exemplar widgets can embed AirLogic-style computational capabilities in existing 3D models. We evaluate our widgets' performance, quantifying the loss of airflow (1) in each widget type, (2) based on printing orientation, and (3) from internal object geometry. Finally, we present five applications that illustrate AirLogic's potential.
710,742
Title: Flaticulation: Laser Cutting Joints with Articulated Angles Abstract: ABSTRACTWe present Flaticulation, a method to laser cut joints that clutch two cut-in-place flat boards at designated articulated angles. We discover special T-patterns added on the shared edge of two pieces allowing them to be clutched at a bending angle. We analyze the structure and propose a parametric model regarding the T-pattern under laser cutting to predict the joint articulated angle. We validate our proposed model by measuring real prototypes and conducting stress-strain analysis to understand their structural strength. Finally, we provide a user interface for our example applications, including fast assembling unfolded 3D polygonal models and adding detent mechanisms for functional objects such as a mouse and reconfigurable objects such as a headphone.
710,743
Title: Synthesis-Assisted Video Prototyping From a Document Abstract: ABSTRACT Video productions commonly start with a script, especially for talking head videos that feature a speaker narrating to the camera. When the source materials come from a written document – such as a web tutorial, it takes iterations to refine content from a text article to a spoken dialogue, while considering visual compositions in each scene. We propose Doc2Video, a video prototyping approach that converts a document to interactive scripting with a preview of synthetic talking head videos. Our pipeline decomposes a source document into a series of scenes, each automatically creating a synthesized video of a virtual instructor. Designed for a specific domain – programming cookbooks, we apply visual elements from the source document, such as a keyword, a code snippet or a screenshot, in suitable layouts. Users edit narration sentences, break or combine sections, and modify visuals to prototype a video in our Editing UI. We evaluated our pipeline with public programming cookbooks. Feedback from professional creators shows that our method provided a reasonable starting point to engage them in interactive scripting for a narrated instructional video.
710,744
Title: OmniScribe: Authoring Immersive Audio Descriptions for 360° Videos Abstract: ABSTRACT Blind people typically access videos via audio descriptions (AD) crafted by sighted describers who comprehend, select, and describe crucial visual content in the videos. 360° video is an emerging storytelling medium that enables immersive experiences that people may not possibly reach in everyday life. However, the omnidirectional nature of 360° videos makes it challenging for describers to perceive the holistic visual content and interpret spatial information that is essential to create immersive ADs for blind people. Through a formative study with a professional describer, we identified key challenges in describing 360° videos and iteratively designed OmniScribe, a system that supports the authoring of immersive ADs for 360° videos. OmniScribe uses AI-generated content-awareness overlays for describers to better grasp 360° video content. Furthermore, OmniScribe enables describers to author spatial AD and immersive labels for blind users to consume the videos immersively with our mobile prototype. In a study with 11 professional and novice describers, we demonstrated the value of OmniScribe in the authoring workflow; and a study with 8 blind participants revealed the promise of immersive AD over standard AD for 360° videos. Finally, we discuss the implications of promoting 360° video accessibility.
710,745
Title: Record Once, Post Everywhere: Automatic Shortening of Audio Stories for Social Media Abstract: Following the prevalence of short-form video, short-form voice content has emerged on social media platforms like Twitter and Facebook. A challenge that creators face is hard constraints on content length. If the initial recording is not short enough, they need to re-record or edit their content. Both are time-consuming, and the latter, if supported, can have a learning curve. Moreover, creators need to manually create multiple versions to publish content on platforms with different length constraints. To simplify this process, we present ROPE (Record Once, Post Everywhere). Creators can record voice content once, and our system will automatically shorten it to all length limits by removing parts of the recording for each target. We formulate this as a combinatorial optimization problem and propose a novel algorithm that automatically selects optimal sentence combinations from the original content to comply with each length constraint. Creators can customize the algorithmically shortened content by specifying sentences to include or exclude. Our system can also use the user-specified constraints to recompute and provide a new version. We conducted a user study comparing ROPE with a sentence-based manual editing baseline. The results show that ROPE can generate high-quality edits, alleviating the cognitive load of creators when shortening content. While our system and user study address short-form voice content specifically, we believe that the same concept can also be applied to other media such as video with narration and dialog.
710,746
Title: Sketch-Based Design of Foundation Paper Pieceable Quilts Abstract: ABSTRACTFoundation paper piecing is a widely used quilt-making technique in which fabric pieces are sewn onto a paper guide to facilitate construction. But, designing paper pieceable quilt patterns is challenging because the sewing process imposes constraints on both the geometry and sewing order of the fabric pieces. Based on a formative study with expert quilt designers, we develop a novel sketch-based tool for designing such quilt patterns. Our tool lets designers sketch a partial design as a set of edges, which may intersect but do not have to form closed polygons, and our tool automatically completes it into a fully paper pieceable pattern. We contribute a new sketch-completion algorithm that extends the input sketched edges into a planar mesh composed of closed polygonal faces representing fabric pieces, determines a paper pieceable sewing order for the faces, and breaks complicated sketches into independently paper pieceable sections when necessary. A partial input design often admits multiple visually different completions. Thus, our tool lets designers specify completion heuristics, which are based on current quilt design practices, to control the appearance of the completed quilt. Initial user evaluations with novice and expert quilt designers suggest that our tool fits within current design workflows and greatly facilitates designing foundation paper pieceable quilts by allowing users to focus on the visual design rather than tedious constraint checks.
710,747
Title: X-Bridges: Designing Tunable Bridges to Enrich 3D Printed Objects' Deformation and Stiffness Abstract: Bridges are unique structures that appear in fused deposition modeling (FDM); they can make rigid prints flexible but have not been fully explored. This paper presents X-Bridges, an end-to-end workflow that allows novice users to design tunable bridges that enrich 3D printed objects' deformation and physical properties. Specifically, we first provide a series of deformation primitives (e.g., bend, twist, coil, compress and stretch) with three versions of stiffness (loose, elastic, stable) based on parametrized bridging experiments. Embedding these printing parameters, a design tool is developed to modify the imported 3D model, evaluate optimized printing parameters for bridges, preview the shape-changing process, and generate the G-code file for 3D printing. Finally, we demonstrate the design space of X-Bridges through a set of applications that enable foldable, resilient, and interactive shape-changing objects.
710,748
Title: Prototyping Soft Devices with Interactive Bioplastics Abstract: ABSTRACT Designers and makers are increasingly interested in leveraging bio-based and bio-degradable ‘do-it-yourself’ (DIY) materials for sustainable prototyping. Their self-produced bioplastics possess compelling properties such as self-adhesion but have so far not been functionalized to create soft interactive devices, due to a lack of DIY techniques for the fabrication of functional electronic circuits and sensors. In this paper, we contribute a DIY approach for creating Interactive Bioplastics that is accessible to a wide audience, making use of easy-to-obtain bio-based raw materials and familiar tools. We present three types of conductive bioplastic materials and their formulation: sheets, pastes and foams. Our materials enable additive and subtractive fabrication of soft circuits and sensors. Furthermore, we demonstrate how these materials can substitute conventional prototyping materials, be combined with off-the-shelf electronics, and be fed into a sustainable material ‘life-cycle’ including disassembly, re-use, and re-melting of materials. A formal characterization of our conductors highlights that they are even on-par with commercially available carbon-based conductive pastes.
710,749
Title: WaddleWalls: Room-scale Interactive Partitioning System using a Swarm of Robotic Partitions Abstract: ABSTRACTWe propose WaddleWalls, a room-scale interactive partitioning system using a swarm of robotic partitions that allows occupants to interactively reconfigure workspace partitions to satisfy their privacy and interaction needs. The system can automatically arrange the partitions’ layout designed by the user on demand. The user specifies the target partition’s position, orientation, and height using the controller’s 3D manipulations. In this work, we discuss the design considerations of the interactive partition system and implement WaddleWalls’ proof-of-concept prototype assembled with off-the-shelf materials. We demonstrate the functionalities of WaddleWalls through several application scenarios in an open-planned office environment. We also conduct an initial user evaluation that compares WaddleWalls with conventional wheeled partitions, finding that WaddleWalls allows effective workspace partitioning and mitigates the physical and temporal efforts needed to fulfill ad hoc social and privacy requirements. Finally, we clarify the feasibility, potential, and future challenges of WaddleWalls through an interview with experts.
710,750
Title: Integrating Living Organisms in Devices to Implement Care-based Interactions Abstract: ABSTRACTResearchers have been exploring how incorporating care-based interactions can change the user's attitude & relationship towards an interactive device. This is typically achieved through virtual care where users care for digital entities. In this paper, we explore this concept further by investigating how physical care for a living organism, embedded as a functional component of an interactive device, also changes user-device relationships. Living organisms differ as they require an environment conducive to life, which in our concept, the user is responsible for providing by caring for the organism (e.g., feeding it). We instantiated our concept by engineering a smartwatch that includes a slime mold that physically conducts power to a heart rate sensor inside the device, acting as a living wire. In this smartwatch, the availability of heart-rate sensing depends on the health of the slime mold—with the user's care, the slime mold becomes conductive and enables the sensor; conversely, without care, the slime mold dries and disables the sensor (resuming care resuscitates the slime mold). To explore how our living device was perceived by users, we conducted a study where participants wore our slime mold-integrated smartwatch for 9-14 days. We found that participants felt a sense of responsibility, developed a reciprocal relationship, and experienced the organism's growth as a source of affect. Finally, to allow engineers and designers to expand on our work, we abstract our findings into a set of technical and design recommendations when engineering an interactive device that incorporates this type of care-based relationship.
710,751
Title: NFCStack: Identifiable Physical Building Blocks that Support Concurrent Construction and Frictionless Interaction Abstract: In this paper, we propose NFCStack, a physical building block system that supports stacking and frictionless interaction and is based on near-field communication (NFC). This system consists of a portable station that can support and resolve the order of three types of passive, identifiable stackables: bricks, boxes, and adapters. The bricks support stable and sturdy physical construction, whereas the boxes support frictionless tangible interactions. The adapters provide an interface between the aforementioned two types of stackables and convert the top of a stack into a terminal for detecting interactions between NFC-tagged objects. In contrast to existing systems based on NFC or radio-frequency identification technologies, NFCStack is portable, supports simultaneous interactions, and resolves stacking and interaction events responsively, even when objects are not strictly aligned. Evaluation results indicate that the proposed system effectively supports 12 layers of rich-ID stacking with the three types of building blocks, even if every box is stacked with a 6-mm offset. The results also indicate possible generalized applications of the proposed system, including 2.5-dimensional construction. The interaction styles are described using several educational application examples, and the design implications of this research are explained.
710,752
Title: ForceSight: Non-Contact Force Sensing with Laser Speckle Imaging Abstract: ABSTRACT Force sensing has been a key enabling technology for a wide range of interfaces such as digitally enhanced body and world surfaces for touch interactions. Additionally, force often contains rich contextual information about user activities and can be used to enhance machine perception for improved user and environment awareness. To sense force, conventional approaches rely on contact sensors made of pressure-sensitive materials such as piezo films/discs or force-sensitive resistors. We present ForceSight, a non-contact force sensing approach using laser speckle imaging. Our key observation is that object surfaces deform in the presence of force. This deformation, though very minute, manifests as observable and discernible laser speckle shifts, which we leverage to sense the applied force. This non-contact force-sensing capability opens up new opportunities for rich interactions and can be used to power user-/environment-aware interfaces. We first built and verified the model of laser speckle shift with surface deformations. To investigate the feasibility of our approach, we conducted studies on metal, plastic, wood, along with a wide variety of materials. Additionally, we included supplementary tests to fully tease out the performance of our approach. Finally, we demonstrated the applicability of ForceSight with several demonstrative example applications.
710,753
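The ForceSight abstract attributes force sensing to observable laser speckle shifts caused by minute surface deformation. The sketch below illustrates only the underlying sensing primitive, estimating the 2D shift between two speckle frames via standard phase correlation; the calibration from shift magnitude to applied force is not described in the abstract and is omitted here.

```python
# Minimal sketch of the sensing primitive ForceSight builds on: estimating the small
# 2D shift between two laser-speckle frames. Phase correlation is one standard way to
# do this; mapping shift to force would require a calibration step not shown here.
import numpy as np

def speckle_shift(frame_a: np.ndarray, frame_b: np.ndarray):
    """Integer-pixel (dy, dx) shift of frame_a relative to frame_b via phase correlation."""
    A = np.fft.fft2(frame_a)
    B = np.fft.fft2(frame_b)
    cross_power = A * np.conj(B)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase
    corr = np.abs(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame into negative offsets.
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
speckle = rng.random((128, 128))                        # stand-in for a captured speckle frame
shifted = np.roll(speckle, shift=(2, -3), axis=(0, 1))  # surface deformation as a small shift
print(speckle_shift(shifted, speckle))                  # expect (2, -3)
```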
Title: Diffscriber: Describing Visual Design Changes to Support Mixed-Ability Collaborative Presentation Authoring Abstract: ABSTRACTVisual slide-based presentations are ubiquitous, yet slide authoring tools are largely inaccessible to people who are blind or visually impaired (BVI). When authoring presentations, the 9 BVI presenters in our formative study usually work with sighted collaborators to produce visual slides based on the text content they produce. While BVI presenters valued collaborators’ visual design skill, the collaborators often felt they could not fully review and provide feedback on the visual changes that were made. We present Diffscriber, a system that identifies and describes changes to a slide’s content, layout, and style for presentation authoring. Using our system, BVI presentation authors can efficiently review changes to their presentation by navigating either a summary of high-level changes or individual slide elements. To learn more about changes of interest, presenters can use a generated change hierarchy to navigate to lower-level change details and element styles. BVI presenters using Diffscriber were able to identify slide design changes and provide feedback more easily as compared to using only the slides alone. More broadly, Diffscriber illustrates how advances in detecting and describing visual differences can improve mixed-ability collaboration.
710,754
Title: MuscleRehab: Improving Unsupervised Physical Rehabilitation by Monitoring and Visualizing Muscle Engagement Abstract: ABSTRACTUnsupervised physical rehabilitation traditionally has used motion tracking to determine correct exercise execution. However, motion tracking is not representative of the assessment of physical therapists, which focus on muscle engagement. In this paper, we investigate if monitoring and visualizing muscle engagement during unsupervised physical rehabilitation improves the execution accuracy of therapeutic exercises by showing users whether they target the right muscle groups. To accomplish this, we use wearable electrical impedance tomography (EIT) to monitor muscle engagement and visualize the current state on a virtual muscle-skeleton avatar. We use additional optical motion tracking to also monitor the user’s movement. We conducted a user study with 10 participants that compares exercise execution while seeing muscle + motion data vs. motion data only, and also presented the recorded data to a group of physical therapists for post-rehabilitation analysis. The results indicate that monitoring and visualizing muscle engagement can improve both the therapeutic exercise accuracy during rehabilitation, and post-rehabilitation evaluation for physical therapists.
710,755
Title: Wikxhibit: Using HTML and Wikidata to Author Applications that Link Data Across the Web Abstract: ABSTRACTWikidata is a companion to Wikipedia that captures a substantial part of the information about most Wikipedia entities in machine-readable structured form. In addition to directly representing information from Wikipedia itself, Wikidata also cross-references how additional information about these entities can be accessed through APIs on hundreds of other websites. This trove of valuable information has become a source of numerous domain-specific information presentations on the web, such as art galleries or directories of actors. Developers have created a number of such tools that present Wikidata data, sometimes combined with data accessed through Wikidata’s cross-referenced web APIs. However, the creation of these presentations requires significant programming effort and is often impossible for non-programmers. In this work, we empower users, even non-programmers, to create presentations of Wikidata and other sources of data on the web, using only HTML with no additional programming. We present Wikxhibit, a JavaScript library for creating HTML-based data presentations of data on Wikidata and the other data APIs it cross-references. Wikxhibit allows a user to author plain HTML that, with the addition of a few new attributes, is able to dynamically fetch and display any Wikidata data or its cross-referenced Web APIs. Wikxhibit’s JavaScript library uses Wikidata as the bridge to connect all the cross-referenced web APIs, allowing users to aggregate data from multiple Web APIs at once, seamlessly connecting object to object, without even realizing that they are pulling data from multiple websites. We integrate Wikxhibit with Mavo, an HTML language extension for describing web applications declaratively, to empower plain-HTML authors to create presentations of Wikidata. Our evaluation shows that users, even non-programmers, can create presentations of Wikidata and other sources of web data using Wikxhibit in just 5 minutes.
710,756
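For context on the Wikxhibit entry above, this is the kind of raw Wikidata API request that its HTML attributes let non-programmers avoid writing by hand. The sketch uses the standard public wbgetentities endpoint; it is not Wikxhibit's own code, and Q42 (Douglas Adams) is just an example entity.

```python
# The kind of Wikidata API call that Wikxhibit's HTML attributes abstract away.
# Standard public endpoint; not Wikxhibit's implementation.
import requests

def fetch_entity(qid: str, lang: str = "en") -> dict:
    resp = requests.get(
        "https://www.wikidata.org/w/api.php",
        params={
            "action": "wbgetentities",
            "ids": qid,
            "props": "labels|descriptions|claims",
            "languages": lang,
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["entities"][qid]

entity = fetch_entity("Q42")
print(entity["labels"]["en"]["value"], "-", entity["descriptions"]["en"]["value"])
# A presentation layer would then follow cross-referenced identifiers in the claims
# (e.g., external IDs) to pull richer data from other websites' APIs.
```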
Title: ARnnotate: An Augmented Reality Interface for Collecting Custom Dataset of 3D Hand-Object Interaction Pose Estimation Abstract: ABSTRACT Vision-based 3D pose estimation has substantial potential in hand-object interaction applications and requires user-specified datasets to achieve robust performance. We propose ARnnotate, an Augmented Reality (AR) interface enabling end-users to create custom data using a hand-tracking-capable AR device. Unlike other dataset collection strategies, ARnnotate first guides a user to manipulate a virtual bounding box and records its poses and the user’s hand joint positions as the labels. By leveraging the spatial awareness of AR, the user manipulates the corresponding physical object while following the in-situ AR animation of the bounding box and hand model, while ARnnotate captures the user’s first-person view as the images of the dataset. A 12-participant user study was conducted, and the results proved the system’s usability in terms of the spatial accuracy of the labels, the satisfactory performance of the deep neural networks trained with the data collected by ARnnotate, and the users’ subjective feedback.
710,757
Title: Prolonging VR Haptic Experiences by Harvesting Kinetic Energy from the User Abstract: We propose a new technical approach to implement untethered VR haptic devices that contain no battery, yet can render on-demand haptic feedback. The key is that, via our approach, a haptic device charges itself by harvesting the user's kinetic energy (i.e., movement), even without the user needing to realize this. This is achieved by integrating the energy harvesting with the virtual experience in a responsive manner. Whenever our batteryless haptic device is about to lose power, it switches to harvesting mode (by engaging its clutch to a generator) and, simultaneously, the VR headset renders an alternative version of the current experience that depicts resistive forces (e.g., rowing a boat in VR). As a result, the user feels realistic haptics that correspond to what they should be feeling in VR, while unknowingly charging the device via their movements. Once the haptic device's supercapacitors are charged, they wake up its microcontroller to communicate with the VR headset. The VR experience can now use the recently harvested power for on-demand haptics, including vibration, electrical or mechanical force-feedback; this process can be repeated, ad infinitum. We instantiated a version of our concept by implementing an exoskeleton (with vibration, electrical & mechanical force-feedback) that harvests the user's arm movements. We validated it via a user study, in which participants, even without knowing the device was harvesting, rated its VR experience as more realistic and engaging than with a baseline VR setup. Finally, we believe our approach enables haptics for prolonged use, which is especially useful in untethered VR setups, since devices capable of haptic feedback have traditionally been reserved for situations with ample power. Instead, with our approach, a user who engages in hours-long VR and has grown accustomed to finding a battery-dead haptic device that no longer works will simply resurrect the haptic device with their movement.
710,758
Title: ReCapture: AR-Guided Time-lapse Photography Abstract: ABSTRACTWe present ReCapture, a system that leverages AR-based guidance to help users capture time-lapse data with hand-held mobile devices. ReCapture works by repeatedly guiding users back to the precise location of previously captured images so they can record time-lapse videos one frame at a time without leaving their camera in the scene. Building on previous work in computational re-photography, we combine three different guidance modes to enable parallel hand-held time-lapse capture in general settings. We demonstrate the versatility of our system on a wide variety of subjects and scenes captured over a year of development and regular use, and explore different visualizations of unstructured hand-held time-lapse data.
710,759
Title: PSST: Enabling Blind or Visually Impaired Developers to Author Sonifications of Streaming Sensor Data Abstract: ABSTRACT We present the first toolkit that equips blind and visually impaired (BVI) developers with the tools to create accessible data displays. Called PSST (Physical computing Streaming Sensor data Toolkit), it enables BVI developers to understand the data generated by sensors from a mouse to a micro:bit physical computing platform. By assuming visual abilities, earlier efforts to make physical computing accessible fail to address the need for BVI developers to access sensor data. PSST enables BVI developers to understand real-time, real-world sensor data by providing control over what should be displayed, as well as when to display and how to display sensor data. PSST supports filtering based on raw or calculated values, highlighting, and transformation of data. Output formats include tonal sonification, nonspeech audio files, speech, and SVGs for laser cutting. We validate PSST through a series of demonstrations and a user study with BVI developers.
710,760
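The PSST abstract describes a pipeline of filtering, transformation, and display (including tonal sonification) of streaming sensor data. The sketch below is a minimal illustration of that idea, filtering readings to a valid band and mapping each remaining value linearly to a tone frequency; the thresholds, frequency range, and sensor values are assumptions, not PSST's actual API.

```python
# Minimal sketch of a filter-then-sonify pipeline in the spirit of PSST (not its API):
# readings outside a band are dropped, the rest map linearly to a tone frequency.

def filter_readings(readings, low, high):
    """Keep only readings inside [low, high]."""
    return [r for r in readings if low <= r <= high]

def to_frequency(value, value_range=(0.0, 1023.0), freq_range=(220.0, 880.0)):
    """Linearly map a sensor value to a tone frequency in Hz for sonification."""
    v0, v1 = value_range
    f0, f1 = freq_range
    t = (value - v0) / (v1 - v0)
    return f0 + t * (f1 - f0)

# Example: a stream of 10-bit light-sensor readings from a micro:bit-style board.
stream = [12, 480, 1001, 530, 2048, 610]          # 2048 is out of range and filtered out
tones = [round(to_frequency(v)) for v in filter_readings(stream, 0, 1023)]
print(tones)   # [228, 530, 866, 562, 614] -> could be played back as a tonal sweep
```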
Title: Interactive Public Displays and Wheelchair Users: Between Direct, Personal and Indirect, Assisted Interaction Abstract: ABSTRACT We examine accessible interactions for wheelchair users and public displays with three studies. In a first study, we conduct a Systematic Literature Review, from which we report very few scientific papers on this topic and a preponderant focus on touch input. In a second study, we conduct a Systematic Video Review using YouTube as a data source, and unveil accessibility challenges for public displays and several input modalities alternative to direct touch. In a third study, we conduct semi-structured interviews with eleven wheelchair users to understand their experience interacting with public displays and to collect their preferences for more accessible input modalities. Based on our findings, we propose the “assisted interaction” phase to extend Vogel and Balakrishnan’s four-phase interaction model with public displays, and the “ability” dimension for cross-device interaction design to support, via users’ personal mobile devices, independent use of interactive public displays.
710,761
Title: Grid-Coding: An Accessible, Efficient, and Structured Coding Paradigm for Blind and Low-Vision Programmers Abstract: ABSTRACT Sighted programmers often rely on visual cues (e.g., syntax coloring, keyword highlighting, code formatting) to perform common coding activities in text-based languages (e.g., Python). Unfortunately, blind and low-vision (BLV) programmers hardly benefit from these visual cues because they interact with computers via assistive technologies (e.g., screen readers), which fail to communicate visual semantics meaningfully. Prior work on making text-based programming languages and environments accessible mostly focused on code navigation and, to some extent, code debugging, but not much toward code editing, which is an essential coding activity. We present Grid-Coding to fill this gap. Grid-Coding renders source code in a structured 2D grid, where each row, column, and cell have consistent, meaningful semantics. Its design is grounded on prior work and refined by 28 BLV programmers through online participatory sessions for 2 months. We implemented the Grid-Coding prototype as a spreadsheet-like web application for Python and evaluated it with a study with 12 BLV programmers. This study revealed that, compared to a text editor (i.e., the go-to editor for BLV programmers), our prototype enabled BLV programmers to navigate source code quickly, find the context of a statement easily, detect syntax errors in existing code effectively, and write new code with fewer syntax errors. The study also revealed how BLV programmers adopted Grid-Coding and demonstrated novel interaction patterns conducive to increased programming productivity.
710,762
Title: Seeing our Blind Spots: Smart Glasses-based Simulation to Increase Design Students' Awareness of Visual Impairment Abstract: As the population ages, many will acquire visual impairments. To improve design for these users, it is essential to build awareness of their perspective during everyday routines, especially for design students. Although several visual impairment simulation toolkits exist in both academia and as commercial products, analog and static visual impairment simulation tools do not simulate effects concerning the user's eye movements. Meanwhile, VR and video see-through-based AR simulation methods are constrained by smaller fields of view when compared with the natural human visual field and also suffer from vergence-accommodation conflict (VAC), which correlates with visual fatigue, headache, and dizziness. In this paper, we enable an on-the-go, VAC-free, visually impaired experience by leveraging our optical see-through glasses. The FOV of our glasses is approximately 160 degrees horizontally and 140 degrees vertically, and participants can experience both loss of central vision and loss of peripheral vision at different severities. Our evaluation (n = 14) indicates that the glasses can significantly and effectively reduce visual acuity and visual field without causing typical motion sickness symptoms such as headaches or visual fatigue. Questionnaires and qualitative feedback also showed how the glasses helped to increase participants' awareness of visual impairment.
710,763
Title: RemoteLab: A VR Remote Study Toolkit Abstract: ABSTRACT User studies play a critical role in human subject research, including human-computer interaction. Virtual reality (VR) researchers tend to conduct user studies in-person at their laboratory, where participants experiment with novel equipment to complete tasks in a simulated environment, which is often new to many. However, due to social distancing requirements in recent years, VR research has been disrupted because participants could not attend in-person laboratory studies. On the other hand, affordable head-mounted displays are becoming common, enabling access to VR experiences and interactions outside traditional research settings. Recent research has shown that unsupervised remote user studies can yield reliable results; however, the setup of experiment software designed for remote studies can be technically complex and convoluted. We present a novel open-source Unity toolkit, RemoteLab, designed to facilitate the preparation of remote experiments by providing a set of tools that synchronize experiment state across multiple computers, record and collect data from various multimedia sources, and replay the accumulated data for analysis. This toolkit enables VR researchers to conduct remote experiments when in-person experiments are not feasible, to increase the sampling variety of a target population, and to reach participants who otherwise would not be able to attend in person.
710,764
Title: MechARspace: An Authoring System Enabling Bidirectional Binding of Augmented Reality with Toys in Real-time Abstract: ABSTRACT Augmented Reality (AR), which blends physical and virtual worlds, presents the possibility of enhancing traditional toy design. By leveraging bidirectional virtual-physical interactions between humans and the designed artifact, such AR-enhanced toys can provide more playful and interactive experiences for traditional toys. However, designers are constrained by the complexity and technical difficulties of the current AR content creation processes. We propose MechARspace, an immersive authoring system that supports users to create toy-AR interactions through direct manipulation and visual programming. Based on the elicitation study, we propose a bidirectional interaction model which maps both ways: from the toy inputs to reactions of AR content, and also from the AR content to the toy reactions. This model guides the design of our system which includes a plug-and-play hardware toolkit and an in-situ authoring interface. We present multiple use cases enabled by MechARspace to validate this interaction model. Finally, we evaluate our system with a two-session user study where users first recreated a set of predefined toy-AR interactions and then implemented their own AR-enhanced toy designs.
710,765
Title: RealityLens: A User Interface for Blending Customized Physical World View into Virtual Reality Abstract: ABSTRACT Research has enabled virtual reality (VR) users to interact with the physical world by blending the physical world view into the virtual environment. However, current solutions are designed for specific use cases and hence are not capable of covering users’ varying needs for accessing information about the physical world. This work presents RealityLens, a user interface that allows users to peep into the physical world in VR with the reality lenses they deployed for their needs. For this purpose, we first conducted a preliminary study with experienced VR users to identify users’ needs for interacting with the physical world, which led to a set of features for customizing the scale, placement, and activation method of a reality lens. We evaluated the design in a user study (n=12) and collected the feedback of participants engaged in two VR applications while encountering a range of interventions from the physical world. The results show that users’ VR presence tends to be better preserved when interacting with the physical world with the support of the RealityLens interface.
710,766
Title: AUIT – the Adaptive User Interfaces Toolkit for Designing XR Applications Abstract: ABSTRACTAdaptive user interfaces can improve experiences in Extended Reality (XR) applications by adapting interface elements according to the user’s context. Although extensive work explores different adaptation policies, XR creators often struggle with their implementation, which involves laborious manual scripting. The few available tools are underdeveloped for realistic XR settings where it is often necessary to consider conflicting aspects that affect an adaptation. We fill this gap by presenting AUIT, a toolkit that facilitates the design of optimization-based adaptation policies. AUIT allows creators to flexibly combine policies that address common objectives in XR applications, such as element reachability, visibility, and consistency. Instead of using rules or scripts, specifying adaptation policies via adaptation objectives simplifies the design process and enables creative exploration of adaptations. After creators decide which adaptation objectives to use, a multi-objective solver finds appropriate adaptations in real-time. A study showed that AUIT allowed creators of XR applications to quickly and easily create high-quality adaptations.
710,767
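As a rough, hedged illustration of optimization-based adaptation (not AUIT's API), the sketch below scores candidate placements of a UI element with a weighted combination of two objectives and keeps the best one; the objective functions, weights, and coordinates are made-up assumptions.

    # Minimal sketch of objective-based UI adaptation: score candidate 3D
    # placements by a weighted sum of objectives and pick the best. The
    # reachability/visibility objectives and weights are illustrative
    # stand-ins, not AUIT's actual objective set or solver.
    import math

    def reachability(placement, hand):    # closer to the hand scores higher
        return 1.0 / (1.0 + math.dist(placement, hand))

    def visibility(placement, gaze):      # closer to the gaze point scores higher
        return 1.0 / (1.0 + math.dist(placement, gaze))

    def best_placement(candidates, hand, gaze, weights=(0.6, 0.4)):
        def score(p):
            return weights[0] * reachability(p, hand) + weights[1] * visibility(p, gaze)
        return max(candidates, key=score)

    candidates = [(0.2, 1.2, 0.5), (0.5, 1.0, 0.4), (0.9, 1.5, 0.8)]
    print(best_placement(candidates, hand=(0.4, 1.0, 0.4), gaze=(0.5, 1.4, 0.6)))

A real multi-objective solver would search a continuous space and reconcile conflicting objectives more carefully, but the weighted-scoring structure is the core idea.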
Title: Muscle Synergies Learning with Electrical Muscle Stimulation for Playing the Piano Abstract: ABSTRACT When playing scales on the piano, playing all notes evenly is a basic technique to improve the quality of music. However, it is difficult for beginners to do this because they need to achieve appropriate muscle synergies of the forearm and shoulder muscles, i.e., pressing keys as well as sliding their hands sideways. In this paper, we propose a system using electrical muscle stimulation (EMS) to teach beginners how to improve their muscle synergies while playing scales. We focus on the “thumb-under” method and assist with it by applying EMS to the deltoid muscle. We conducted a user study to investigate whether our EMS-based system can help beginners learn new muscle synergies in playing ascending scales. We divided the participants into two groups: an experimental group that practiced with EMS and a control group that practiced without EMS. The results showed that practicing with EMS was more effective in improving the evenness of scales than without EMS and that the muscle synergies changed after practicing.
710,768
Title: ELAXO: Rendering Versatile Resistive Force Feedback for Fingers Grasping and Twisting Abstract: ABSTRACT Haptic feedback not only enhances immersion in virtual reality (VR) but also delivers experts’ haptic sensation tips in VR training, e.g., properly clamping a tenon and mortise joint or tightening a screw in the assembly of VR factory training, which could even improve the training performance. However, different scenarios require varied and complicated manipulation. Although previous research has achieved haptic feedback of virtual objects’ shape, stiffness, or resistive force in pressing or grasping, rotational resistive force when twisting or turning virtual objects is seldom discussed or explored, especially for a wearable device. Therefore, we propose a wearable device, ELAXO, to integrate continuous resistive force and continuous rotational resistive force with or without resilience in grasping and twisting, respectively. ELAXO is an exoskeleton with rings, mechanical brakes and elastic bands. The brakes achieve shape rendering and switch between modes with and without resilience for the resistive force. The detachable and rotatable rings and elastic bands render continuous resistive force in grasping and twisting. We conducted a just noticeable difference (JND) study to understand users’ ability to distinguish the four conditions (resistive force and rotational resistive force, each with and without resilience) separately. A VR study was then performed to verify that the versatile resistive force feedback from ELAXO enhances the VR experience.
710,769
Title: TipTrap: A Co-located Direct Manipulation Technique for Acoustically Levitated Content. Abstract: ABSTRACT Acoustic levitation has emerged as a promising approach for mid-air displays, by using multiple levitated particles as 3D voxels, cloth and thread props, or high-speed tracer particles, under the promise of creating 3D displays that users can see, hear and feel with their bare eyes, ears and hands. However, interaction with this mid-air content always occurred at a distance, since external objects in the display volume (e.g. user’s hands) can disturb the acoustic fields and make the particles fall. This paper proposes TipTrap, a co-located direct manipulation technique for acoustically levitated particles. TipTrap leverages the reflection of ultrasound on the users’ skin and employs a closed-loop system to create functional acoustic traps 2.1 mm below the fingertips, and addresses its 3 basic stages: selection, manipulation and deselection. We use Finite-Differences Time Domain (FDTD) simulations to explain the principles enabling TipTrap, and explore how finger reflections and user strategies influence the quality of the traps (e.g. approaching direction, orientation and tracking errors), and use these results to design our technique. We then implement the technique, characterizing its performance with a robotic hand setup and finish with an exploration of the ability of TipTrap to manipulate different types of levitated content.
710,770
Title: DigituSync: A Dual-User Passive Exoskeleton Glove That Adaptively Shares Hand Gestures Abstract: ABSTRACTWe engineered DigituSync, a passive-exoskeleton that physically links two hands together, enabling two users to adaptively transmit finger movements in real-time. It uses multiple four-bar linkages to transfer both motion and force, while still preserving congruent haptic feedback. Moreover, we implemented a variable-length linkage that allows adjusting the force transmission ratio between the two users and regulates the amount of intervention, which enables users to customize their learning experience. DigituSync's benefits emerge from its passive design: unlike existing haptic devices (motor-based exoskeletons or electrical muscle stimulation), DigituSync has virtually no latency and does not require batteries/electronics to transmit or adjust movements, making it useful and safe to deploy in many settings, such as between students and teachers in a classroom. We validated DigituSync by means of technical evaluations and a user study, demonstrating that it instantly transfers finger motions and forces with the ability of adaptive force transmission, which allowed participants to feel more control over their own movements and to feel the teacher's intervention was more responsive. We also conducted two exploratory sessions with a music teacher and deaf-blind users, which allowed us to gather experiential insights from the teacher's side and explore DigituSync in applications.
710,771
Title: EtherPose: Continuous Hand Pose Tracking with Wrist-Worn Antenna Impedance Characteristic Sensing Abstract: ABSTRACT EtherPose is a continuous hand pose tracking system employing two wrist-worn antennas, from which we measure the real-time dielectric loading resulting from different hand geometries (i.e., poses). Unlike worn camera-based methods, our RF approach is more robust to occlusion from clothing and avoids capturing potentially sensitive imagery. Through a series of simulations and empirical studies, we designed a proof-of-concept, worn implementation built around compact vector network analyzers. Sensor data is then interpreted by a machine learning backend, which outputs a fully-posed 3D hand. In a user study, we show how our system can track hand pose with a mean Euclidean joint error of 11.6 mm, even when covered in fabric. We also studied 2DOF wrist angle and micro-gesture tracking. In the future, our approach could be miniaturized and extended to include more and different types of antennas, operating at different self resonances.
710,772
Title: DeltaPen: A Device with Integrated High-Precision Translation and Rotation Sensing on Passive Surfaces Abstract: ABSTRACT We present DeltaPen, a pen device that operates on passive surfaces without the need for external tracking systems or active sensing surfaces. DeltaPen integrates two adjacent lens-less optical flow sensors at its tip, from which it reconstructs accurate directional motion as well as yaw rotation. DeltaPen also supports tilt interaction using a built-in inertial sensor. A pressure sensor and high-fidelity haptic actuator complement our pen device while retaining a compact form factor that supports mobile use on uninstrumented surfaces. We present a processing pipeline that reliably extracts fine-grained pen translations and rotations from the two optical flow sensors. To assess the accuracy of our translation and angle estimation pipeline, we conducted a technical evaluation in which we compared our approach with ground-truth measurements of participants’ pen movements during typical pen interactions. We conclude with several example applications that leverage our device’s capabilities. Taken together, we demonstrate novel input dimensions with DeltaPen that have so far only existed in systems that require active sensing surfaces or external tracking.
710,773
Title: DiscoBand: Multiview Depth-Sensing Smartwatch Strap for Hand, Body and Environment Tracking Abstract: ABSTRACT Real-time tracking of a user’s hands, arms and environment is valuable in a wide variety of HCI applications, from context awareness to virtual reality. Rather than rely on fixed and external tracking infrastructure, the most flexible and consumer-friendly approaches are mobile, self-contained, and compatible with popular device form factors (e.g., smartwatches). In this vein, we contribute DiscoBand, a thin sensing strap not exceeding 1 cm in thickness. Sensors operating so close to the skin inherently face issues with occlusion. To help overcome this, our strap uses eight distributed depth sensors imaging the hand from different viewpoints, creating a sparse 3D point cloud. An additional eight depth sensors image outwards from the band to track the user’s body and surroundings. In addition to evaluating arm and hand pose tracking, we also describe a series of supplemental applications powered by our band’s data, including held object recognition and environment mapping.
710,774
Title: FLEX-SDK: An Open-Source Software Development Kit for Creating Social Robots Abstract: ABSTRACT We present FLEX-SDK: an open-source software development kit that allows creating a social robot from two simple tablet screens. FLEX-SDK involves tools for designing the robot face and its facial expressions, creating screens for input/output interactions, controlling the robot through a Wizard-of-Oz interface, and scripting autonomous interactions through a simple text-based programming interface. We demonstrate how this system can be used to replicate an interaction study and we present nine case studies involving controlled experiments, observational studies, participatory design sessions, and outreach activities in which our tools were used by researchers and participants to create and interact with social robots. We discuss common observations and lessons learned from these case studies. Our work demonstrates the potential of FLEX-SDK to lower the barrier to entry for Human-Robot Interaction research.
710,775
Title: Concept-Annotated Examples for Library Comparison Abstract: ABSTRACT Programmers often rely on online resources—such as code examples, documentation, blogs, and Q&A forums—to compare similar libraries and select the one most suitable for their own tasks and contexts. However, this comparison task is often done in an ad-hoc manner, which may result in suboptimal choices. Inspired by Analogical Learning and Variation Theory, we hypothesize that rendering many concept-annotated code examples from different libraries side-by-side can help programmers (1) develop a more comprehensive understanding of the libraries’ similarities and distinctions and (2) make more robust, appropriate library selections. We designed a novel interactive interface, ParaLib, and used it as a technical probe to explore to what extent many side-by-side concept-annotated examples can facilitate the library comparison and selection process. A within-subjects user study with 20 programmers shows that, when using ParaLib, participants made more consistent, suitable library selections and provided more comprehensive summaries of libraries’ similarities and differences.
710,776
Title: MetamorphX: An Ungrounded 3-DoF Moment Display that Changes its Physical Properties through Rotational Impedance Control Abstract: ABSTRACTHumans can estimate the properties of wielded objects (e.g., inertia and viscosity) using the force applied to the hand. We focused on this mechanism and aimed to represent the properties of wielded objects by dynamically changing the force applied to the hand. We propose MetamorphX, which uses control moment gyroscopes (CMGs) to generate ungrounded, 3-degree-of-freedom moment feedback. The high-response moments obtained from CMGs allow the inertia and viscosity of motion to be set to the desired values via impedance control. A technical evaluation indicated that our device can generate a moment with a 60-ms delay. The inertia and viscosity of motion were varied by 0.01 kg·m² and 0.1 Ns, respectively. Additionally, we demonstrated that our device can dynamically change the inertia and viscosity of motion through virtual reality applications.
710,777
Title: MagneShape: A Non-electrical Pin-Based Shape-Changing Display Abstract: ABSTRACT Pin-based shape-changing displays can present dynamic shape changes by actuating a number of pins. However, the use of many linear actuators to achieve this makes the electrical structure and mechanical construction of the display complicated. We propose a simple pin-based shape-changing display that outputs shape and motions without any electronic elements. Our display consists of magnetic pins in a pin housing, with a magnetic sheet underneath it. The magnetic sheet has a specific magnetic pattern on its surface, and each magnetic pin has a magnet at its lower end. The repulsive force generated between the magnetic sheet and the magnetic pin levitates the pin vertically, and the height of the pin-top varies depending on the magnetic pattern. This paper introduces the basic structure of the display and compares several fabrication methods for the magnetic pins, to highlight the applicability of this method. We have also demonstrated some applications and discussed future possibilities.
710,778
Title: Kinergy: Creating 3D Printable Motion using Embedded Kinetic Energy Abstract: ABSTRACTWe present Kinergy—an interactive design tool for creating self-propelled motion by harnessing the energy stored in 3D printable springs. To produce controllable output motions, we introduce 3D printable kinetic units, a set of parameterizable designs that encapsulate 3D printable springs, compliant locks, and transmission mechanisms for three non-periodic motions—instant translation, instant rotation, continuous translation—and four periodic motions—continuous rotation, reciprocation, oscillation, intermittent rotation. Kinergy allows the user to create motion-enabled 3D models by embedding kinetic units, customize output motion characteristics by parameterizing embedded springs and kinematic elements, control energy by operating the specialized lock, and preview the resulting motion in an interactive environment. We demonstrate the potential of our techniques via example applications from spring-loaded cars to kinetic sculptures and close with a discussion of key challenges such as geometric constraints.
710,779
Title: spaceR: Knitting Ready-Made, Tactile, and Highly Responsive Spacer-Fabric Force Sensors for Continuous Input Abstract: ABSTRACT With spaceR, we present both design and implementation of a resistive force-sensor based on a spacer fabric knit. Due to its softness and elasticity, our sensor provides an appealing haptic experience. It enables continuous input with high precision due to its innate haptic feedback and can be manufactured ready-made on a regular two-bed weft knitting machine, without requiring further post-processing steps. For our multi-component knit, we add resistive yarn to the filler material, in order to achieve a highly sensitive and responsive pressure sensing textile. Sensor resistance drops by ~90% when actuated with moderate finger pressure of 2 N, making the sensor accessible also for straightforward readout electronics. We discuss related manufacturing parameters and their effect on shape and electrical characteristics and explore design opportunities to harness visual and tactile affordances. Finally, we demonstrate several application scenarios by implementing diverse spaceR variations, including analog rocker- and four-way directional buttons, and show the possibility of mode-switching by tracking temporal data.
710,780
Title: Reconfigurable Elastic Metamaterials Abstract: ABSTRACT We present a novel design for materials that are reconfigurable by end-users. Conceptually, we propose decomposing such reconfigurable materials into (1) a generic, complex material consisting of engineered microstructures (known as metamaterials) designed to be purchased and (2) a simple configuration geometry that can be fabricated by end-users to fit their individual use cases. Specifically, in this paper we investigate reconfiguring our material’s elasticity, such that it can cover existing objects and thereby augment their material properties. Users can configure their materials by generating the configuration geometry using our interactive editor, 3D printing it using commonly available filaments (e. g., PLA), and pressing it onto the generic material for local coupling. We characterize the mechanical properties of our reconfigurable elastic metamaterial and showcase the material’s applicability as, e.g., augmentation for haptic props in virtual reality, a reconfigurable shoe sole for different activities, or a battleship-like ball game.
710,781
Title: Chatbots Facilitating Consensus-Building in Asynchronous Co-Design Abstract: ABSTRACT Consensus-building is an essential process for the success of co-design projects. To build consensus, stakeholders need to discuss conflicting needs and viewpoints, converge their ideas toward shared interests, and grow their willingness to commit to group decisions. However, managing group discussions is challenging in large co-design projects with multiple stakeholders. In this paper, we investigate the interaction design of a chatbot that can mediate consensus-building conversationally. By interacting with individual stakeholders, the chatbot collects ideas to satisfy conflicting needs and engages stakeholders to consider others’ viewpoints, without having stakeholders directly interact with each other. Results from an empirical study in an educational setting (N = 12) suggest that the approach can increase stakeholders’ commitment to group decisions and maintain the effect even on the group decisions that conflict with personal interests. We conclude that chatbots can facilitate consensus-building in small-to-medium-sized projects, but more work is needed to scale up to larger projects.
710,782
Title: BO as Assistant: Using Bayesian Optimization for Asynchronously Generating Design Suggestions Abstract: ABSTRACTMany design tasks involve parameter adjustment, and designers often struggle to find desirable parameter value combinations by manipulating sliders back and forth. For such a multi-dimensional search problem, Bayesian optimization (BO) is a promising technique because of its intelligent sampling strategy; in each iteration, BO samples the most effective points considering both exploration (i.e., prioritizing unexplored regions) and exploitation (i.e., prioritizing promising regions), enabling efficient searches. However, existing BO-based design frameworks take the initiative in the design process and thus are not flexible enough for designers to freely explore the design space using their domain knowledge. In this paper, we propose a novel design framework, BO as Assistant, which enables designers to take the initiative in the design process while also benefiting from BO’s sampling strategy. The designer can manipulate sliders as usual; the system monitors the slider manipulation to automatically estimate the design goal on the fly and then asynchronously provides unexplored-yet-promising suggestions using BO’s sampling strategy. The designer can choose to use the suggestions at any time. This framework uses a novel technique to automatically extract the necessary information to run BO by observing slider manipulation without requesting additional inputs. Our framework is domain-agnostic, demonstrated by applying it to photo color enhancement, 3D shape design for personal fabrication, and procedural material design in computer graphics.
710,783
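For readers unfamiliar with BO's sampling strategy, the sketch below shows the exploration/exploitation trade-off on a single slider in [0, 1]: a Gaussian-process surrogate is fit to the values tried so far and the next suggestion maximizes an upper confidence bound. The kernel, its length scale, and the UCB acquisition are generic textbook choices used here for illustration, not the paper's goal-estimation mechanism.

    # Minimal GP-UCB sketch for suggesting the next slider position: balance
    # exploitation (high posterior mean) with exploration (high uncertainty).
    import numpy as np

    def rbf(a, b, length=0.15):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

    def suggest(x_tried, y_tried, beta=2.0, noise=1e-6):
        x_tried, y_tried = np.asarray(x_tried, float), np.asarray(y_tried, float)
        grid = np.linspace(0.0, 1.0, 201)                   # candidate slider positions
        K_inv = np.linalg.inv(rbf(x_tried, x_tried) + noise * np.eye(len(x_tried)))
        k_star = rbf(grid, x_tried)
        mean = k_star @ K_inv @ y_tried                     # GP posterior mean
        var = 1.0 - np.einsum("ij,jk,ik->i", k_star, K_inv, k_star)
        ucb = mean + beta * np.sqrt(np.clip(var, 0.0, None))
        return grid[np.argmax(ucb)]                         # unexplored-yet-promising point

    # The designer has tried three slider values with these (estimated) scores:
    print(suggest([0.1, 0.5, 0.9], [0.2, 0.8, 0.3]))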
Title: We-toon: A Communication Support System between Writers and Artists in Collaborative Webtoon Sketch Revision Abstract: ABSTRACT We present a communication support system, namely We-toon, that can bridge the webtoon writers and artists during sketch revision (i.e., character design and draft revision). In the highly iterative design process between the webtoon writers and artists, writers often have difficulties in precisely articulating their feedback on sketches owing to their lack of drawing proficiency. This drawback makes the writers rely on textual descriptions and reference images found using search engines, leading to indirect and inefficient communications. Inspired by a formative study, we designed We-toon to help writers revise webtoon sketches and effectively communicate with artists. Through a GAN-based image synthesis and manipulation, We-toon can interactively generate diverse reference images and synthesize them locally on any user-provided image. Our user study with 24 professional webtoon authors demonstrated that We-toon outperforms the traditional methods in terms of communication effectiveness and the writers’ satisfaction level related to the revised image.
710,784
Title: Look over there! Investigating Saliency Modulation for Visual Guidance with Augmented Reality Glasses Abstract: ABSTRACT Augmented Reality has traditionally been used to display digital overlays in real environments. Many AR applications such as remote collaboration, picking tasks, or navigation require highlighting physical objects for selection or guidance. These highlights use graphical cues such as outlines and arrows. Whilst effective, they greatly contribute to visual clutter, possibly occlude scene elements, and can be problematic for long-term use. Substituting those overlays, we explore saliency modulation to accentuate objects in the real environment to guide the user’s gaze. Instead of manipulating video streams, like done in perception and cognition research, we investigate saliency modulation of the real world using optical-see-through head-mounted displays. This is a new challenge, since we do not have full control over the view of the real environment. In this work we provide our specific solution to this challenge, including built prototypes and their evaluation.
710,785
Title: Color-to-Depth Mappings as Depth Cues in Virtual Reality Abstract: ABSTRACT Despite significant improvements to Virtual Reality (VR) technologies, most VR displays are fixed focus and depth perception is still a key issue that limits the user experience and the interaction performance. To supplement humans’ inherent depth cues (e.g., retinal blur, motion parallax), we investigate users’ perceptual mappings of distance to virtual objects’ appearance to generate visual cues aimed to enhance depth perception. As a first step, we explore color-to-depth mappings for virtual objects so that their appearance differs in saturation and value to reflect their distance. Through a series of controlled experiments, we elicit and analyze users’ strategies of mapping a virtual object’s hue, saturation, value and a combination of saturation and value to its depth. Based on the collected data, we implement a computational model that generates color-to-depth mappings fulfilling adjustable requirements on confusion probability, number of depth levels, and consistent saturation/value changing tendency. We demonstrate the effectiveness of color-to-depth mappings in a 3D sketching task, showing that compared to single-colored targets and strokes, with our mappings, the users were more confident in the accuracy without extra cognitive load and reduced the perceived depth error by 60.8%. We also implement four VR applications and demonstrate how our color cues can benefit the user experience and interaction performance in VR.
710,786
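A minimal sketch of what such a cue can look like in code: keep hue fixed and interpolate saturation and value linearly with normalized distance. The linear form and the specific ranges below are illustrative assumptions, not the computational model fitted from the user data in the paper.

    # Toy color-to-depth cue: nearer objects are more saturated and brighter,
    # farther objects are desaturated and darker. Ranges are made-up values.
    import colorsys

    def depth_to_rgb(depth, near=0.5, far=5.0, hue=0.6,
                     sat_range=(1.0, 0.3), val_range=(1.0, 0.5)):
        t = min(max((depth - near) / (far - near), 0.0), 1.0)   # 0 = near, 1 = far
        sat = sat_range[0] + t * (sat_range[1] - sat_range[0])
        val = val_range[0] + t * (val_range[1] - val_range[0])
        return colorsys.hsv_to_rgb(hue, sat, val)

    for d in (0.5, 2.0, 5.0):
        print(d, tuple(round(c, 2) for c in depth_to_rgb(d)))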
Title: Exploring Sensory Conflict Effect Due to Upright Redirection While Using VR in Reclining & Lying Positions Abstract: ABSTRACTWhen users use Virtual Reality (VR) in nontraditional postures, such as while reclining or lying in relaxed positions, their views lean upwards and need to be corrected to make sure they see upright contents and perceive the interactions as if they were standing. Such upright redirection is expected to cause visual-vestibular-proprioceptive conflict, affecting users’ internal perceptions (e.g., body ownership, presence, simulator sickness) and external perceptions (e.g., egocentric space perception) in VR. Different body reclining angles may affect vestibular sensitivity and lead to the dynamic weighting of multi-sensory signals in the sensory integration. In this paper, we investigated the impact of upright redirection on users’ perceptions, with users’ physical bodies tilted at various angles backward and views upright redirected accordingly. The results showed that upright redirection led to simulator sickness, confused self-awareness, weak upright illusion, and increased space perception deviations to various extents when users were at different reclining positions, and the situations were the worst in the 45° conditions. Based on these results, we designed some illusion-based and sensory-based methods that were shown to be effective in reducing the impact of sensory conflict through preliminary evaluations.
710,787
Title: ShrinkCells: Localized and Sequential Shape-Changing Actuation of 3D-Printed Objects via Selective Heating Abstract: ABSTRACTThe unique behaviors of thermoplastic polymers enable shape-changing interfaces made of 3D printed objects that do not require complex electronics integration. Because existing techniques rely on an external trigger, such as heat, applied globally to a 3D printed object (e.g., hot water, heat gun, oven) and initiating the shape-changing response all at once, independent control of multiple parts of the object becomes nearly impossible. We introduce ShrinkCells, a set of shape-changing actuators that use localized heat to shrink or bend by combining the properties of two materials: conductive PLA is used to generate localized heat, which selectively triggers the shrinking of a Shape Memory Polymer. The unique benefit of ShrinkCells is their capability of triggering simultaneous or sequential shape transformations for different geometries using a single power supply. This results in 3D printed rigid structures that actuate in sequence, avoiding self-collisions when unfolding. We contribute to the body of literature on 4D fabrication with a systematic investigation of selective heating with two different materials, the design and evaluation of the ShrinkCells shape-changing primitives, and applications demonstrating the usage of these actuators.
710,788
Title: SenSequins: Smart Textile Using 3D Printed Conductive Sequins Abstract: ABSTRACT In this research, we used traditional sequin embroidery as the basis and a 3D printer to expand the design space of sequin materials and structures, developing a new 2.5D smart conductive sequin textile with multiple sensing and interaction capabilities and providing users with a customization system for automated design and manufacturing. Through 3D printing, we have developed a variety of 3D sequins. We used each sequin as an individual design unit to realize various circuit designs and sensing functions by adjusting design primitives such as conductivity, shape, and arrangement. We also designed applications such as motion sensing of body movements and posture detection of the ankle. In addition, we surveyed user requirements through user testing to optimize the design space. This paper describes the design space, design software, automation, application, and user study of various smart sequin textiles.
710,789
Title: interiqr: Unobtrusive Edible Tags using Food 3D Printing Abstract: ABSTRACTWe present interiqr, a method that utilizes the infill parameter in the 3D printing process to embed information inside the food that is difficult to recognize with the human eye. Our key idea is to utilize the air space or secondary materials to generate a specific pattern inside the food without changing the model geometry. As a result, our method exploits the patterns that appear as hidden edible tags to store the data and simultaneously adds them to a 3D printing pipeline. Our contribution also includes the framework that connects the user with a data-embedding interface through the food 3D printing process, and the decoding system allows the user to decode the information inside the 3D printed food through backlight illumination and a simple image processing technique. Finally, we evaluate the usability of our method under different settings and demonstrate our method through the example application scenarios.
710,790
Title: Automated Filament Inking for Multi-color FFF 3D Printing Abstract: ABSTRACT We propose a novel system for low-cost multi-color Fused Filament Fabrication (FFF) 3D printing, allowing for the creation of customizable colored filament using a pre-processing approach. We developed an open-source device to automatically ink filament using permanent markers. Our device can be built using 3D printed parts and off-the-shelf electronics. An accompanying web-based interface allows users to view GCODE toolpaths for a multi-color print and quickly generate filament color profiles. Taking a pre-processing approach makes this system compatible with the majority of desktop 3D printers on the market, as the processed filament behaves no differently from conventional filaments. Furthermore, inked filaments can be produced economically, reducing the need for excessive purchasing of material to expand color options. We demonstrate the efficacy of our system by fabricating monochromatic objects, objects with gradient colors, objects with bi-directional properties, as well as multi-color objects with up to four colors in a single print.
710,791
Title: Photographic Lighting Design with Photographer-in-the-Loop Bayesian Optimization Abstract: ABSTRACTIt is important for photographers to have the best possible lighting configuration at the time of shooting; otherwise, they need post-processing on images, which may cause artifacts and deterioration. Thus, photographers often struggle to find the best possible lighting configuration by manipulating lighting devices, including light sources and modifiers, in a trial-and-error manner. In this paper, we propose a novel computational framework to support photographers. This framework assumes that every lighting device is programmable; that is, its adjustable parameters (e.g., orientation, intensity, and color temperature) can be set using a program. Using our framework, photographers do not need to learn how the parameter values affect the resulting lighting, and even do not need to determine the strategy of the trial-and-error process; instead, photographers need only concentrate on evaluating which lighting configuration is more desirable among options suggested by the system. The framework is enabled by our novel photographer-in-the-loop Bayesian optimization, which is sample-efficient (i.e., the number of required evaluation steps is small) and which can also be guided by providing a rough painting of the desired lighting configuration if any. We demonstrate how the framework works in both simulated virtual environments and a physical environment, suggesting that it could find pleasing lighting configurations quickly in around 10 iterations. Our user study suggests that the framework enables the photographer to concentrate on the look of captured images rather than the parameters, compared with the traditional manual lighting workflow.
710,792
Title: Breathing Life Into Biomechanical User Models Abstract: ABSTRACT Forward biomechanical simulation in HCI holds great promise as a tool for evaluation, design, and engineering of user interfaces. Although reinforcement learning (RL) has been used to simulate biomechanics in interaction, prior work has relied on unrealistic assumptions about the control problem involved, which limits the plausibility of emerging policies. These assumptions include direct torque actuation as opposed to muscle-based control; direct, privileged access to the external environment, instead of imperfect sensory observations; and lack of interaction with physical input devices. In this paper, we present a new approach for learning muscle-actuated control policies based on perceptual feedback in interaction tasks with physical input devices. This allows modelling of more realistic interaction tasks with cognitively plausible visuomotor control. We show that our simulated user model successfully learns a variety of tasks representing different interaction methods, and that the model exhibits characteristic movement regularities observed in studies of pointing. We provide an open-source implementation which can be extended with further biomechanical models, perception models, and interactive environments.
710,793
Title: INTENT: Interactive Tensor Transformation Synthesis Abstract: ABSTRACT There is a growing interest in adopting Deep Learning (DL) given its superior performance in many domains. However, modern DL frameworks such as TensorFlow often come with a steep learning curve. In this work, we propose INTENT, an interactive system that infers user intent and generates corresponding TensorFlow code on behalf of users. INTENT helps users understand and validate the semantics of generated code by rendering individual tensor transformation steps with intermediate results and element-wise data provenance. Users can further guide INTENT by marking certain TensorFlow operators as desired or undesired, or directly manipulating the generated code. A within-subjects user study with 18 participants shows that users can finish programming tasks in TensorFlow more successfully and in only half the time, compared with a variant of INTENT that has no interaction or visualization support.
710,794
Title: Bayesian Hierarchical Pointing Models Abstract: ABSTRACT Bayesian hierarchical models are probabilistic models that have hierarchical structures and use Bayesian methods for inference. In this paper, we extend Fitts’ law to be a Bayesian hierarchical pointing model and compare it with the typical pooled pointing models (i.e., treating all observations as the same pool), and the individual pointing models (i.e., building an individual model for each user separately). The Bayesian hierarchical pointing models outperform pooled and individual pointing models in predicting the distribution and the mean of pointing movement time, especially when the training data are sparse. Our investigation also shows that both noninformative and weakly informative priors are adequate for modeling pointing actions, although the weakly informative prior performs slightly better than the noninformative prior when the training data size is small. Overall, we conclude that the expected advantages of Bayesian hierarchical models hold for pointing tasks. Bayesian hierarchical modeling should be adopted as a more principled and effective approach to building pointing models than the current common practices in HCI, which use pooled or individual models.
710,795
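As a sketch of what a hierarchical extension of Fitts' law can look like (the specific priors below are generic weakly informative choices for illustration, not necessarily the paper's exact specification):

    \begin{aligned}
    MT_{ij} &\sim \mathcal{N}\big(a_i + b_i \log_2(D_{ij}/W_{ij} + 1),\ \sigma^2\big) && \text{movement time of user } i \text{ on trial } j\\
    a_i &\sim \mathcal{N}(\mu_a, \tau_a^2), \quad b_i \sim \mathcal{N}(\mu_b, \tau_b^2) && \text{user-level Fitts' law parameters}\\
    \mu_a, \mu_b &\sim \mathcal{N}(0, 10^2), \quad \sigma, \tau_a, \tau_b \sim \mathrm{HalfNormal}(1) && \text{population-level (weakly informative) priors}
    \end{aligned}

Pooled models correspond to forcing a_i = a and b_i = b for all users, while individual models drop the shared population level; the hierarchy lets sparse per-user data borrow strength from the population.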
Title: Summarizing Sets of Related ML-Driven Recommendations for Improving File Management in Cloud Storage Abstract: ABSTRACTPersonal cloud storage systems increasingly offer recommendations to help users retrieve or manage files of interest. For example, Google Drive’s Quick Access predicts and surfaces files likely to be accessed. However, when multiple, related recommendations are made, interfaces typically present recommended files and any accompanying explanations individually, burdening users. To improve the usability of ML-driven personal information management systems, we propose a new method for summarizing related file-management recommendations. We generate succinct summaries of groups of related files being recommended. Summaries reference the files’ shared characteristics. Through a within-subjects online study in which participants received recommendations for groups of files in their own Google Drive, we compare our summaries to baselines like visualizing a decision tree model or simply listing the files in a group. Compared to the baselines, participants expressed greater understanding and confidence in accepting recommendations when shown our novel recommendation summaries.
710,796
Title: InterWeave: Presenting Search Suggestions in Context Scaffolds Information Search and Synthesis Abstract: ABSTRACTWeb search is increasingly used to satisfy complex, exploratory information goals. Exploring and synthesizing information into knowledge can be slow and cognitively demanding due to a disconnect between search tools and sense-making workspaces. Our work explores how we might integrate contextual query suggestions within a person’s sensemaking environment. We developed InterWeave, a prototype that leverages a human wizard to generate contextual search guidance and to place the suggestions within the emergent structure of a searcher’s notes. To investigate how weaving suggestions into the sensemaking workspace affects a user’s search and sensemaking behavior, we ran a between-subjects study (n=34) where we compared InterWeave’s in-context placement with a conventional list of query suggestions. InterWeave’s approach not only promoted active searching, information gathering and knowledge discovery, but also helped participants keep track of new suggestions and connect newly discovered information to existing knowledge, in comparison to presenting suggestions as a separate list. These results point to directions for future work to interweave contextual and natural search guidance into everyday work.
710,797
Title: Characterizing and orchestrating VM reservation in geo-distributed clouds to improve the resource efficiency Abstract: ABSTRACTCloud providers often build a geo-distributed cloud from multiple datacenters in different geographic regions, to serve tenants at different locations. Tenants that run large-scale applications often reserve resources based on their peak loads in the regions close to their end users to handle the ever-changing application load, wasting a large amount of resources. We therefore characterize the VM request patterns of the top tenants in our production public geo-distributed cloud, and open-source four months of VM request traces from the top 20 tenants of our cloud. The characterization shows that the resource usage of large tenants has various temporal and spatial patterns on the dimensions of time series, regions, and VM types, and has the potential of peak shaving between different tenants to further reduce the resource reservation cost. Based on the findings, we propose a resource reservation and VM request scheduling scheme named ROS to minimize the resource reservation cost while satisfying the VM allocation requests. Our experiments show that ROS reduces the overall deployment cost by 75.4% and the reserved resources by 60.1%, compared to the tenant-specified reservation strategy.
710,798
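A toy illustration of the peak-shaving opportunity that the characterization points to: because tenants' peaks rarely align, reserving for the peak of the aggregate demand needs fewer cores than summing each tenant's individual peak. The numbers are made up, and this is not ROS's actual reservation algorithm.

    # Peak shaving across tenants in one region: compare per-tenant peak
    # reservations with a shared reservation sized for the aggregate peak.
    import numpy as np

    demand = np.array([            # cores needed per hour (rows = tenants)
        [40, 80, 30, 20],
        [10, 20, 70, 90],
        [50, 30, 40, 60],
    ])
    per_tenant_reservation = demand.max(axis=1).sum()   # each tenant reserves its own peak
    shared_reservation = demand.sum(axis=0).max()       # reserve for the aggregate peak
    print(per_tenant_reservation, shared_reservation)   # 230 vs. 170 cores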
Title: Owl: performance-aware scheduling for resource-efficient function-as-a-service cloud Abstract: ABSTRACTThis work documents our experience of improving the scheduler in Alibaba Function Compute, a public FaaS platform. It commences with our observation that memory and CPU are under-utilized in most FaaS sandboxes. A natural solution is to overcommit VM resources when allocating sandboxes, whereas the ensuing contention may cause performance degradation and compromise user experience. To complicate matters, the degradation in FaaS can arise from external factors, such as failed dependencies of user functions. We design Owl to achieve both high utilization and performance stability. It introduces a customizable rule system for users to specify their tolerance of degradation, and overcommits resources with a dual approach. (1) For less-invoked functions, it allocates resources to the sandboxes with a usage-based heuristic, keeps monitoring their performance, and remedies any detected degradation. It determines whether a degraded sandbox is affected by external factors by setting aside a contention-free environment and migrating the affected sandbox there as a comparison baseline. (2) For frequently-invoked functions, Owl profiles the interference patterns among collocated sandboxes and places the sandboxes under the guidance of these profiles. The collocation profiling is designed to tackle the constraint that profiling has to be conducted in production. Owl further consolidates idle sandboxes to reduce resource waste. We prototype Owl in our production system and implement a representative benchmark suite to evaluate it. The results demonstrate that the prototype could reduce VM cost by 43.80% and effectively mitigate latency degradation, with negligible overhead incurred.
710,799
Title: Cloud-native workflow scheduling using a hybrid priority rule and dynamic task parallelism Abstract: ABSTRACTDemand for efficient cloud-native workflow scheduling is growing as many data science workloads are composed of several tasks with dependencies. As container technology becomes more prevalent in cloud communities, containerized workflow orchestration tools are introduced and become standard for scheduling workflows. However, current schedulers use simple heuristics and rely on the user's choice of priority and parallelism level for tasks without accounting for workflow-specific information. We introduce a workflow-aware scheduling algorithm that uses workflow information for scheduling tasks, without user input, with an objective of improving resource utilization and minimizing weighted workflow completion time, i.e., duration multiplied by a user-specific workflow priority. Our scheduler comprises two strategies: a hybrid priority rule inspired by production planning ideas, and a task splitting rule based on a convex task processing time curve over the parallelism level. Using simulation, we demonstrate that our algorithm (1) produces an efficient balance of weighted workflow completion time and resource utilization and (2) outperforms deterministic parallelism.
710,800
Title: Network resource management as a database problem Abstract: ABSTRACTNetwork resource management, or how bandwidth is allocated to flows, tenants, or applications, is a challenging problem. In this paper, we call for using the database abstraction for network resource management. A database provides simple constructs for supporting complex resource management tasks, such as transactions to support bandwidth reservations on multiple links, virtual tables for restricting the view of tenants in a cloud environment, and many others. To highlight the opportunities and challenges in this space, we present a research agenda around new abstractions and policy languages, the necessary data plane support, and potential for domain specific replication and sharding strategies for the resource management database.
710,801
Title: ESCHER: expressive scheduling with ephemeral resources Abstract: ABSTRACTAs distributed applications become increasingly complex, so do their scheduling requirements. This development calls for cluster schedulers that are not only general, but also evolvable. Unfortunately, most existing cluster schedulers are not evolvable: when confronted with new requirements, they need major rewrites to support these requirements. Examples include gang-scheduling support in Kubernetes [6, 39] or task-affinity in Spark [39]. Some cluster schedulers [14, 30] expose physical resources to applications to address this. While these approaches are evolvable, they push the burden of implementing scheduling mechanisms in addition to the policies entirely to the application. ESCHER is a cluster scheduler design that achieves both evolvability and application-level simplicity. ESCHER uses an abstraction exposed by several recent frameworks (which we call ephemeral resources) that lets the application express scheduling constraints as resource requirements. These requirements are then satisfied by a simple mechanism matching resource demands to available resources. We implement ESCHER on Kubernetes and Ray, and show that this abstraction can be used to express common policies offered by monolithic schedulers while allowing applications to easily create new custom policies hitherto unsupported.
710,802
Title: Demeter: QoS-aware CPU scheduling to reduce power consumption of multiple black-box workloads Abstract: ABSTRACTEnergy consumption in cloud data centers has become an increasingly important contributor to greenhouse gas emissions and operation costs. To reduce energy-related costs and improve environmental sustainability, most modern data centers consolidate Virtual Machine (VM) workloads belonging to different application classes, some being latency-critical (LC) and others being more tolerant to performance changes, known as best-effort (BE). However, in public cloud scenarios, the real classes of applications are often opaque to data center operators. The heterogeneous applications from different cloud tenants are usually consolidated onto the same hosts to improve energy efficiency, but it is not trivial to guarantee decent performance isolation among colocated workloads. We tackle the above challenges by introducing Demeter, a QoS-aware power management controller for heterogeneous black-box workloads in public clouds. Demeter is designed to work without offline profiling or prior knowledge about black-box workloads. Through correlation analysis between network throughput and CPU resource utilization, Demeter automatically classifies black-box workloads as either LC or BE. By provisioning differentiated CPU management strategies (including dynamic core allocation and frequency scaling) to LC and BE workloads, Demeter achieves considerable power savings together with minimal impact on the performance of all workloads. We discuss the design and implementation of Demeter in this work, and conduct extensive experimental evaluations to reveal its effectiveness. Our results show that Demeter not only meets the performance demand of all workloads, but also responds quickly to dynamic load changes in our cloud environment. In addition, Demeter saves an average of 10.6% in power consumption compared to state-of-the-art mechanisms.
710,803
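A hedged sketch of the correlation idea: a VM whose CPU utilization tracks its network throughput looks request-driven (LC), while one whose CPU utilization is unrelated to traffic looks batch-like (BE). The correlation threshold and the toy time series below are illustrative assumptions, not Demeter's tuned values.

    # Classify a black-box VM as latency-critical (LC) or best-effort (BE)
    # from the correlation between network throughput and CPU utilization.
    import numpy as np

    def classify(net_throughput, cpu_util, threshold=0.7):
        r = np.corrcoef(net_throughput, cpu_util)[0, 1]
        return "LC" if r >= threshold else "BE"

    net    = [10, 40, 80, 60, 20, 15]     # requests/s over time
    cpu_lc = [12, 35, 78, 55, 22, 14]     # CPU tracks traffic -> request-driven
    cpu_be = [70, 72, 69, 71, 70, 68]     # CPU flat regardless of traffic -> batch-like
    print(classify(net, cpu_lc), classify(net, cpu_be))   # LC BE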
Title: DeepScaling: microservices autoscaling for stable CPU utilization in large scale cloud systems Abstract: ABSTRACTCloud service providers conservatively provision excessive resources to ensure service level objectives (SLOs) are met. They often set lower CPU utilization targets to ensure service quality is not degraded, even when the workload varies significantly. Not only does this potentially waste resources, but it can also consume excessive power in large-scale cloud deployments. This paper aims to minimize resource costs while ensuring SLO requirements are met in a dynamically varying, large-scale production microservice environment. We propose DeepScaling, which introduces three innovative components to adaptively refine the target CPU utilization to a level that is maintained at a stable value to meet SLO constraints while using minimum resources. First, DeepScaling forecasts the workload for each service using a Spatio-temporal Graph Neural Network. Second, DeepScaling estimates the CPU utilization by mapping the workload intensity to an estimated CPU utilization with a Deep Neural Network, while taking into account multiple factors in the cloud environment (e.g., periodic tasks and traffic). Third, DeepScaling generates an autoscaling policy for each service based on an improved Deep Q Network (DQN). The adaptive autoscaling policy updates the target CPU utilization to be a maximum, stable value, while ensuring SLOs are not violated. We compare DeepScaling with state-of-the-art autoscaling approaches in the large-scale production cloud environment of the Ant Group. It shows that DeepScaling outperforms other approaches both in terms of maintaining stable service performance and saving resources, by a significant margin. The deployment of DeepScaling in Ant Group's real production environment with 135 microservices saves the provisioning of over 30,000 CPU cores per day, on average.
710,804
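To make the target-utilization idea concrete, here is a hedged sketch of the final sizing arithmetic: given a workload forecast and a CPU cost per request, the replica count follows from the target utilization. The linear cost model and all numbers are illustrative assumptions; DeepScaling learns the forecast and the workload-to-CPU mapping with neural networks and refines the target utilization with a DQN.

    # Toy replica sizing from a workload forecast and a target CPU utilization.
    import math

    def replicas_needed(predicted_rps, cpu_seconds_per_request, cores_per_replica, target_util):
        required_cores = predicted_rps * cpu_seconds_per_request   # total CPU demand
        usable_cores = cores_per_replica * target_util             # per replica, at the target
        return max(1, math.ceil(required_cores / usable_cores))

    # 12,000 req/s forecast, 2 ms of CPU per request, 4-core replicas, 55% target utilization
    print(replicas_needed(12000, 0.002, 4, 0.55))                  # -> 11 replicas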
Title: Arax: a runtime framework for decoupling applications from heterogeneous accelerators Abstract: ABSTRACTToday, using multiple heterogeneous accelerators efficiently from applications and high-level frameworks, such as TensorFlow and Caffe, poses significant challenges in three respects: (a) sharing accelerators, (b) allocating available resources elastically during application execution, and (c) reducing the required programming effort. In this paper, we present Arax, a runtime system that decouples applications from heterogeneous accelerators within a server. First, Arax maps application tasks dynamically to available resources, managing all required task state, memory allocations, and task dependencies. As a result, Arax can share accelerators across applications in a server and adjust the resources used by each application as load fluctuates over time. Additionally, Arax offers a simple API and includes Autotalk, a stub generator that automatically generates stub libraries for applications already written for specific accelerator types, such as NVIDIA GPUs. Consequently, Arax applications are written once without considering physical details, including the number and type of accelerators. Our results show that applications, such as Caffe, TensorFlow, and Rodinia, can run using Arax with minimum effort and low overhead compared to native execution, about 12% (geometric mean). Arax supports efficient accelerator sharing, by offering up to 20% improved execution times compared to NVIDIA MPS, which supports NVIDIA GPUs only. Arax can transparently provide elasticity, decreasing total application turn-around time by up to 2X compared to native execution without elasticity support.
710,805
Title: Accelerating range queries of primary and secondary indices for key-value separation Abstract: ABSTRACTPrimary and secondary indices in LSM-tree-based key-value (KV) stores play significant roles for real-world applications, but they suffer severe I/O amplification due to compaction operations. Prior works show that KV separation can mitigate the I/O amplification under various workloads for either primary or secondary indices. However, range queries of primary and secondary indices only achieve suboptimal efficiency for two reasons: (1) KV separation improves insert/update performance by sacrificing the performance of range queries, (2) range queries of primary and secondary indices may conflict with each other. We observe that range query performance can be maximized by maintaining purely sequential reads (i.e., strict sequentiality) on SSD devices. Moreover, range queries can be improved even under loose sequentiality. Based on these observations, we propose RISE, aiming to improve the Range query performance of prImary and SEcondary indices by leveraging the strict and loose sequentiality of SSD devices. RISE follows the basic design of KV separation and divides the value log into multiple groups. First, to achieve loose sequentiality, it adopts a key-range data grouping policy to bound the key range of primary indices but relaxes the internal key order in each group; Second, RISE proposes a co-location garbage collection (GC) policy to maintain strict sequentiality for the secondary index; Third, RISE employs a parallel parsing policy to accelerate the parsing process of secondary indices. We implement RISE and the decoupled secondary index of SineKV on top of WiscKey and HashKV for comparison. Evaluations show that RISE can outperform the range query performance of WiscKey and HashKV by 21.3% and 23% for the primary index and 29.8% and 31% for the secondary index. Besides, RISE can provide reasonable update performance and accelerate the value parsing phase of GC by 17.9%.
710,806
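A toy sketch of the key-range grouping idea from the RISE abstract above: values are appended to per-key-range groups of the value log (loose sequentiality inside a group), so a primary-index range scan only reads the few groups that overlap the requested range. Group boundaries and structures are assumptions for illustration, not RISE's actual layout.

    import bisect

    GROUP_BOUNDARIES = ["g", "n", "t"]   # illustrative split points -> 4 groups

    class ValueLog:
        def __init__(self, n_groups):
            self.groups = [[] for _ in range(n_groups)]   # append-only segments

        def group_of(self, key: str) -> int:
            return bisect.bisect_right(GROUP_BOUNDARIES, key)

        def put(self, key: str, value: str) -> None:
            # Append in arrival order: loose sequentiality inside the group.
            self.groups[self.group_of(key)].append((key, value))

        def range_scan(self, lo: str, hi: str):
            # Only groups whose key range overlaps [lo, hi] are read sequentially.
            for gid in range(self.group_of(lo), self.group_of(hi) + 1):
                for k, v in self.groups[gid]:
                    if lo <= k <= hi:
                        yield k, v

    log = ValueLog(len(GROUP_BOUNDARIES) + 1)
    for k in ["apple", "melon", "zebra", "grape", "kiwi"]:
        log.put(k, k.upper())
    print(list(log.range_scan("a", "m")))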
Title: Workload consolidation in alibaba clusters: the good, the bad, and the ugly Abstract: ABSTRACTWeb companies typically run latency-critical long-running services and resource-intensive, throughput-hungry batch jobs in a shared cluster for improved utilization and reduced cost. Despite many recent studies on workload consolidation, the production practice remains largely unknown. This paper describes our efforts to efficiently consolidate the two types of workloads in Alibaba clusters to support the company's e-commerce businesses. At the cluster level, the host and GPU memory are the bottleneck resources that limit the scale of consolidation. Our system proactively reclaims the idle host memory pages of service jobs and dynamically relinquishes their unused host and GPU memory following the predictable diurnal pattern of user traffic, a technique termed tidal scaling. Our system further performs node-level micro-management to ensure that the increased workload consolidation does not result in harmful resource contention. We briefly share our experience in handling the surging traffic from flash crowds of customers during seasonal shopping festivals (e.g., November 11) using these "good" practices. We also discuss the limitations of our current solution (the "bad") and some practical engineering constraints (the "ugly") that make many prior research solutions inapplicable to our system.
710,807
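A toy sketch of the tidal-scaling idea from the Alibaba abstract above: memory reserved by latency-critical services is lent to batch jobs during predictably low-traffic hours and returned before the peak. The traffic profile, reservation size, and headroom are invented for illustration.

    HOURLY_TRAFFIC = [0.2]*7 + [0.6]*3 + [0.9]*10 + [0.5]*4   # fraction of peak, 24 hours
    SERVICE_RESERVED_GB = 512

    def reclaimable_memory(hour: int, headroom: float = 0.2) -> int:
        """Memory (GB) that can be lent to batch jobs at this hour."""
        expected_use = HOURLY_TRAFFIC[hour] * SERVICE_RESERVED_GB
        spare = SERVICE_RESERVED_GB - expected_use
        # Keep a safety headroom so service jobs never starve on a misprediction.
        return max(0, int(spare - headroom * SERVICE_RESERVED_GB))

    for hour in (3, 12, 22):
        print(f"hour {hour:02d}: lend {reclaimable_memory(hour)} GB to batch jobs")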
Title: KOLE: breaking the scalability barrier for managing far edge nodes in cloud Abstract: ABSTRACTIn edge computing, the trend is moving towards leveraging cloud native technologies and platforms such as containerization and Kubernetes to manage edge applications to improve operation efficiency. Unfortunately, Kubernetes supports only several thousand nodes per cluster, which is far fewer than typical far edge use cases require. In this paper, we propose KOLE, a framework that extends upstream Kubernetes to support a large number of far edge nodes. It replaces the existing apiserver-to-node communication mechanism in Kubernetes with an MQTT messaging system. The MQTT broker completely offloads the overhead of keeping numerous HTTP connections for nodes in the apiserver. In KOLE, we avoid creating numerous individual objects in the apiserver by maintaining them in a cloud state cache. The cache is snapshotted periodically for disaster recovery. Overall, KOLE achieves outstanding scalability by sacrificing the manageability of having individual objects, which we believe is a reasonable trade-off for far edge use cases. The experiment results show that KOLE is scalable and can support up to one million nodes.
710,808
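A minimal sketch of the cloud state cache described in the KOLE abstract above: per-node status messages (simulated in memory here instead of a real MQTT broker) update a cache rather than individual apiserver objects, and the cache is snapshotted periodically for disaster recovery. The topic convention and payload format are assumptions.

    import json, time

    class CloudStateCache:
        def __init__(self):
            self.state = {}            # node_id -> last reported status

        def on_message(self, topic: str, payload: str) -> None:
            # Assumed topic convention: "nodes/<node-id>/status"
            node_id = topic.split("/")[1]
            self.state[node_id] = json.loads(payload)

        def snapshot(self, path: str) -> None:
            # Periodic snapshot stands in for durable disaster-recovery state.
            with open(path, "w") as f:
                json.dump({"ts": time.time(), "nodes": self.state}, f)

    cache = CloudStateCache()
    cache.on_message("nodes/edge-0001/status", '{"ready": true, "pods": 3}')
    cache.on_message("nodes/edge-0002/status", '{"ready": false, "pods": 0}')
    cache.snapshot("/tmp/kole-state.json")
    print(len(cache.state), "nodes tracked")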
Title: Mining microservice design patterns Abstract: ABSTRACTBuilding microservices based on design patterns is common practice. Due to the scale and dynamic nature of these applications, engineers usually have only an incomplete mental model of the system. We have developed a methodology that identifies instances of well-known patterns, such as caching or fallbacks, by analyzing traces of executions. This is in contrast with most prior work, which analyzes source code to mine design patterns. Our preliminary results identifying instances of patterns of interest across several different applications are promising, and we discuss the different directions we can explore in this space.
710,809
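A toy illustration of mining one design pattern from execution traces, in the spirit of the abstract above: within a single request, a failed call to one dependency followed by a successful call to an alternate one suggests a fallback. The trace schema is invented; this is not the paper's actual methodology.

    trace = [
        {"req": "r1", "caller": "frontend", "callee": "recs-v2", "status": "error"},
        {"req": "r1", "caller": "frontend", "callee": "recs-v1", "status": "ok"},
        {"req": "r2", "caller": "frontend", "callee": "recs-v2", "status": "ok"},
    ]

    def find_fallbacks(spans):
        by_req = {}
        for s in spans:
            by_req.setdefault(s["req"], []).append(s)
        hits = []
        for req, calls in by_req.items():
            # A fallback shows up as a failed call immediately followed by a
            # successful call from the same caller to a different callee.
            for failed, retry in zip(calls, calls[1:]):
                if (failed["status"] == "error" and retry["status"] == "ok"
                        and failed["caller"] == retry["caller"]
                        and failed["callee"] != retry["callee"]):
                    hits.append((req, failed["callee"], retry["callee"]))
        return hits

    print(find_fallbacks(trace))   # [('r1', 'recs-v2', 'recs-v1')]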
Title: GHive: accelerating analytical query processing in apache hive via CPU-GPU heterogeneous computing Abstract: ABSTRACTAs a popular distributed data warehouse system, Apache Hive has been widely used for big data analytics in many organizations. Meanwhile, exploiting the massive parallelism of GPU to accelerate online analytical processing (OLAP) has been extensively explored in the database community. In this paper, we present GHive, which enhances CPU-based Hive via CPU-GPU heterogeneous computing. GHive is designed for business intelligence applications and provides the same API as Hive for compatibility. To run SQL queries jointly on both CPU and GPU, GHive comes with three key techniques: (i) a novel data model gTable, which is column-based and enables efficient data movement between CPU memory and GPU memory; (ii) a GPU-based operator library Panda, which provides a complete set of SQL operators with extensively optimized GPU implementations; (iii) a hardware-aware MapReduce job placement scheme, which puts jobs judiciously on either GPU or CPU via a cost-based approach. In the experiments, we observe that GHive outperforms Hive in both query processing speed and operating expense on the Star Schema Benchmark (SSB).
710,810
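A hedged sketch of a cost-based CPU/GPU placement decision like the one the GHive abstract mentions: a job goes to the GPU only if its estimated GPU cost, including host-device transfer and setup overheads, beats the CPU cost. The cost model and constants are illustrative, not GHive's actual formulas.

    def place_job(rows: int, gpu_speedup: float,
                  cpu_cost_per_row_us: float = 1.0,
                  transfer_cost_per_row_us: float = 0.3,
                  gpu_setup_us: float = 50_000.0) -> str:
        cpu_cost = rows * cpu_cost_per_row_us
        # The GPU pays a fixed setup cost plus a per-row host<->device transfer
        # cost on top of its faster compute.
        gpu_cost = gpu_setup_us + rows * (cpu_cost_per_row_us / gpu_speedup
                                          + transfer_cost_per_row_us)
        return "GPU" if gpu_cost < cpu_cost else "CPU"

    print(place_job(rows=10_000_000, gpu_speedup=8.0))   # large scan -> GPU
    print(place_job(rows=2_000, gpu_speedup=8.0))        # tiny job -> setup cost keeps it on CPU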
Title: Elastic cloud services: scaling snowflake's control plane Abstract: ABSTRACTSnowflake's "Data Cloud", provided as Software-as-a-Service (SaaS), enables data storage, processing, and analytic solutions in a performant, easy to use, and flexible manner. Although cloud service providers provide the foundational infrastructure to run and scale a variety of workloads, operating Snowflake on cloud infrastructure presents interesting challenges. Customers expect Snowflake to be available at all times and to run their workloads with high performance. Behind the scenes, the software that runs customer workloads needs to be serviced and managed. Additionally, failures in individual components such as Virtual Machines (VM) need to be handled without disrupting running workloads. As a result, lifecycle management of compute artifacts, their scheduling and placement, software rollout (and rollback), replication, failure detection, automatic scaling, and load balancing become extremely important. In this paper, we describe the design and operation of Snowflake's Elastic Cloud Services (ECS) layer that manages cloud resources at global scale to meet the needs of the Snowflake Data Cloud. It provides the control plane to enable elasticity, availability, fault tolerance and efficient execution of customer workloads. ECS runs on multiple cloud service providers and provides capabilities such as cluster management, safe code rollout and rollback, management of pre-started pools of running VMs, horizontal and vertical autoscaling, throttling of incoming requests, VM placement, load-balancing across availability zones and cross-cloud and cross-region replication. We showcase the effect of these capabilities through empirical results on systems that execute millions of queries over petabytes of data on a daily basis.
710,811
Title: Writes hurt: lessons in cache design for optane NVRAM Abstract: ABSTRACTIntel® Optane™ DC Persistent Memory resides on the memory bus and approaches DRAM in access latency. One avenue for its adoption is to employ it in place of persistent storage; another is to use it as a cheaper and denser extension of DRAM. In pursuit of the latter goal, we present the design of a volatile Optane NVRAM cache as a component in a storage engine underlying MongoDB. The primary innovation in our design is a new cache admission policy. We discover that on Optane NVRAM, known for its limited write throughput, the presence of writes disproportionately affects the throughput of reads, much more so than on DRAM. Therefore, an admission policy that indiscriminately admits new data (and thus generates writes), severely limits the rate of data retrieval and results in exceedingly poor performance for the cache overall. We design an admission policy that balances the rate of admission with the rate of lookups using dynamically observed characteristics of the workload. Our implementation outperforms OpenCAS (an off-the-shelf Optane-based block cache) in all cases, and Intel Memory Mode in cases where the database size exceeds the available NVRAM. Our cache is decoupled from the rest of the storage engine and uses generic metrics to guide its admission policy; this design can be easily adopted in other systems.
710,812
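An illustrative sketch of a rate-balancing admission policy in the spirit of the abstract above: because writes on Optane disproportionately hurt concurrent reads, admissions (which generate writes) are throttled to stay near a small fraction of the observed lookup rate. The target ratio and probability rule are assumptions, not the paper's exact policy.

    import random

    class BalancedAdmission:
        def __init__(self, target_write_read_ratio: float = 0.1):
            self.target = target_write_read_ratio
            self.lookups = 0
            self.admissions = 0

        def record_lookup(self) -> None:
            self.lookups += 1

        def should_admit(self) -> bool:
            observed = self.admissions / max(1, self.lookups)
            # Admit more aggressively when under the target ratio, back off above it.
            p = min(1.0, max(0.0, (self.target - observed) / self.target + 0.05))
            if random.random() < p:
                self.admissions += 1
                return True
            return False

    policy = BalancedAdmission()
    admitted = 0
    for _ in range(10_000):
        policy.record_lookup()
        if policy.should_admit():
            admitted += 1
    print(f"admitted {admitted} of 10000 misses")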
Title: SimLess: simulate serverless workflows and their twins and siblings in federated FaaS Abstract: ABSTRACTMany researchers migrate scientific serverless workflows or function choreographies (FCs) on Function-as-a-Service (FaaS) to benefit from its high scalability and elasticity. Unfortunately, the heterogeneity of federated FaaS hampers decisions on appropriate parameter setup to run FCs. Consequently, scientists must choose between accurate but tedious and expensive experiments or simple but cheap and less accurate simulations. However, related works support either simulation models for serverful workflows running on virtual machines and containers or partial FaaS models for individual serverless functions that focus on execution time and neglect various kinds of federated overheads. This paper introduces SimLess, a simulation framework for accurate FC simulations across multiple FaaS providers with a simple and lightweight parameter setup. Unlike the costly approaches that use machine learning over time series to predict the FC behavior, SimLess introduces two light concepts: (1) twins, representing the same function deployed with the same computing, communication, and storage resources, but in other regions of the same FaaS provider, and (2) siblings, representing the same function deployed in the same region with different computing resources. The novel SimLess FC simulation model splits the round trip time of a function into several parameters reused among twins and siblings without necessarily running them. We evaluated SimLess with two scientific FCs deployed across 18 AWS, Google, and IBM regions. SimLess simulates the cumulative overhead with an average inaccuracy of 8.9% without significant differences between regions for learning and validation. Moreover, SimLess uses measurements of a low-concurrency FC executed in a single region to simulate a high-concurrency FC with 2,500 functions in the other regions with an inaccuracy of up to 9.75%. Finally, SimLess reduces the parameter setup effort by 77.23% compared to other simulation approaches.
710,813
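A hedged sketch of the "twins" idea from the SimLess abstract above: split one measured round trip time into a function-specific compute part and region-specific overheads, then predict the twin in another region by swapping in that region's overheads. The decomposition and numbers are illustrative.

    measured = {
        # region -> overheads measured once per region (ms), independent of the function
        "aws-us-east-1": {"auth": 40, "scheduling": 55, "network": 30},
        "aws-eu-west-1": {"auth": 45, "scheduling": 70, "network": 95},
    }

    def compute_time(rtt_ms: float, region: str) -> float:
        """Back out the function's own execution time from one measured RTT."""
        return rtt_ms - sum(measured[region].values())

    def predict_twin_rtt(rtt_ms: float, measured_region: str, twin_region: str) -> float:
        # Reuse the function-specific part, substitute the twin region's overheads.
        return compute_time(rtt_ms, measured_region) + sum(measured[twin_region].values())

    # One cheap measurement in us-east-1 ...
    rtt_us_east = 325.0
    # ... predicts the twin in eu-west-1 without running it there.
    print(predict_twin_rtt(rtt_us_east, "aws-us-east-1", "aws-eu-west-1"))  # 410.0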
Title: SIMPPO: a scalable and incremental online learning framework for serverless resource management Abstract: ABSTRACTServerless Function-as-a-Service (FaaS) offers improved programmability for customers, yet it is not server-"less" and comes at the cost of more complex infrastructure management (e.g., resource provisioning and scheduling) for cloud providers. To maintain service-level objectives (SLOs) and improve resource utilization efficiency, recent research has been focused on applying online learning algorithms such as reinforcement learning (RL) to manage resources. Despite the initial success of applying RL, we first show in this paper that the state-of-the-art single-agent RL algorithm (S-RL) suffers up to 4.8x higher p99 function latency degradation on multi-tenant serverless FaaS platforms compared to isolated environments and is unable to converge during training. We then design and implement a scalable and incremental multi-agent RL framework based on Proximal Policy Optimization (SIMPPO). Our experiments demonstrate that in multi-tenant environments, SIMPPO enables each RL agent to efficiently converge during training and provides online function latency performance comparable to that of S-RL trained in isolation with minor degradation (<9.2%). In addition, SIMPPO reduces the p99 function latency by 4.5x compared to S-RL in multi-tenant cases.
710,814
Title: Method overloading the circuit Abstract: ABSTRACTCircuit breakers are frequently deployed in microservice applications to improve their reliability. They achieve this by short circuiting RPC invocations issued to overloaded or failing services, thereby relieving pressure on those services and allowing them to recover. In this paper, we systematically examine the state of the art in industrial circuit breaker designs. We first present a taxonomy of existing, open-source circuit breaker designs and implementations based on a systematic mapping study. We then examine the relationship between these circuit breaker designs and application reliability. We make a clear case that incorrect application of circuit breakers to an application can hurt reliability in the process of trying to improve it. To address the deficiencies in the state of the art, we propose two new circuit breaker designs and provide guidance on how to properly structure microservice applications for the best circuit breaker use. Finally, we identify several open challenges in circuit breaker usage and design for future researchers.
710,815
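For reference, a minimal sketch of the classic three-state circuit breaker that designs like those surveyed above build on: closed until repeated failures, open (short-circuiting calls) for a cool-down period, then half-open to probe the downstream service. Thresholds and timing are illustrative defaults.

    import time

    class CircuitBreaker:
        def __init__(self, failure_threshold: int = 5, reset_timeout_s: float = 30.0):
            self.failure_threshold = failure_threshold
            self.reset_timeout_s = reset_timeout_s
            self.failures = 0
            self.state = "closed"
            self.opened_at = 0.0

        def call(self, rpc, *args):
            if self.state == "open":
                if time.time() - self.opened_at < self.reset_timeout_s:
                    raise RuntimeError("short-circuited: downstream considered unhealthy")
                self.state = "half-open"          # probe with one trial request
            try:
                result = rpc(*args)
            except Exception:
                self.failures += 1
                if self.state == "half-open" or self.failures >= self.failure_threshold:
                    self.state = "open"
                    self.opened_at = time.time()
                raise
            self.failures = 0
            self.state = "closed"
            return result

    breaker = CircuitBreaker()
    print(breaker.call(lambda x: x * 2, 21))   # healthy call passes through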
Title: Cypress: input size-sensitive container provisioning and request scheduling for serverless platforms Abstract: ABSTRACTThe growing popularity of the serverless platform has seen an increase in the number and variety of applications (apps) being deployed on it. The majority of these apps process user-provided input to produce the desired results. Existing work in the area of input-sensitive profiling has empirically shown that many such apps have input size-dependent execution times which can be determined through modelling techniques. Nevertheless, existing serverless resource management frameworks are agnostic to the input size-sensitive nature of these apps. We demonstrate in this paper that this can potentially lead to container over-provisioning and/or end-to-end Service Level Objective (SLO) violations. To address this, we propose Cypress, an input size-sensitive resource management framework, that minimizes the containers provisioned for apps, while ensuring a high degree of SLO compliance. We perform an extensive evaluation of Cypress on top of a Kubernetes-managed cluster using 5 apps from the AWS Serverless Application Repository and/or OpenFaaS Function Store with real-world traces and varied input size distributions. Our experimental results show that Cypress spawns up to 66% fewer containers, thereby, improving container utilization and saving cluster-wide energy by up to 2.95X and 23%, respectively, versus state-of-the-art frameworks, while remaining highly SLO-compliant (up to 99.99%).
710,816
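An illustrative sketch of input size-sensitive provisioning as motivated by the abstract above: fit a simple linear model of execution time versus input size from past runs, then size the container pool so no container's queued work exceeds the SLO. The model form, packing rule, and numbers are assumptions, not Cypress's actual estimator.

    import statistics

    history = [(1_000, 120), (5_000, 410), (10_000, 820), (20_000, 1_610)]  # (input size, ms)

    def fit_linear(samples):
        xs, ys = zip(*samples)
        mx, my = statistics.mean(xs), statistics.mean(ys)
        slope = sum((x - mx) * (y - my) for x, y in samples) / sum((x - mx) ** 2 for x in xs)
        return slope, my - slope * mx

    slope, intercept = fit_linear(history)

    def containers_needed(pending_sizes, slo_ms: float) -> int:
        est = [slope * s + intercept for s in pending_sizes]   # per-request time estimate
        # Pack requests greedily onto containers so no container's queue exceeds the SLO.
        containers, loads = 1, [0.0]
        for t in sorted(est, reverse=True):
            target = min(range(containers), key=lambda i: loads[i])
            if loads[target] + t > slo_ms:
                containers += 1
                loads.append(t)
            else:
                loads[target] += t
        return containers

    print(containers_needed([2_000, 2_000, 15_000, 30_000], slo_ms=2_500))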
Title: QFaaS: accelerating and securing serverless cloud networks with QUIC Abstract: ABSTRACTServerless computing has greatly simplified cloud programming. It liberates cloud tenants from various system administration and resource management tasks, such as configuration and provisioning. Under this new cloud computing paradigm, a single monolithic application is divided into separate stateless functions, i.e., function-as-a-service (FaaS), which are then orchestrated together to support complex business logic. But there is a fundamental cost associated with this enhanced flexibility. Internal network connections between functions are now initiated frequently, to support serverless features such as agile autoscaling and function chains, raising communication latency. To alleviate this cost, current serverless providers sacrifice security for performance, keeping internal function communications unencrypted. We believe that the emerging QUIC protocol, which has secured and accelerated HTTP communications in the wide area, could proffer a solution to this challenge. We design a QUIC-based FaaS framework, called QFaaS, and implement it on the OpenFaaS platform. Our design explicitly ensures that existing serverless applications can directly benefit from QFaaS without any application code modification. Experiments on synthetic functions and real-world applications demonstrate that QFaaS can reduce communication latency for single functions and function chains by 28% and 40%, respectively, and save up to 50 ms in end-user response time.
710,817
Title: GreenDRL: managing green datacenters using deep reinforcement learning Abstract: ABSTRACTManaging datacenters to maximize efficiency and sustainability is a complex and challenging problem. In this work, we explore the use of deep reinforcement learning (RL) to manage "green" datacenters, bringing a robust approach for designing efficient management systems that account for specific workload, datacenter, and environmental characteristics. We design and evaluate GreenDRL, a system that combines a deep RL agent with simple heuristics to manage workload, energy consumption, and cooling in the presence of onsite generation of renewable energy to minimize brown energy consumption and cost. Our design addresses several important challenges, including adaptability, robustness, and effective learning in an environment comprising an enormous state/action space and multiple stochastic processes. Evaluation results (using simulation) show that GreenDRL is able to learn important principles such as delaying deferrable jobs to leverage variable generation of renewable (solar) energy, and avoiding the use of power-intensive cooling settings even at the expense of leaving some renewable energy unused. In an environment where a fraction of the workload is deferrable by up to 12 hours, GreenDRL can reduce grid electricity consumption for days with different solar energy generation and temperature characteristics by 32--54% compared to a FIFO baseline approach. GreenDRL also matches or outperforms a management approach that uses linear programming together with oracular future knowledge to manage workload and server energy consumption, but leaves the management of the cooling system to a separate (and independent) controller. Overall, our work shows that deep RL is a promising technique for building efficient management systems for green datacenters.
710,818
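A toy heuristic in the spirit of the behavior the GreenDRL abstract says the agent learns: defer a deferrable job (up to 12 hours) to the upcoming hour with the largest predicted solar surplus, or run it immediately if no surplus is expected. The solar forecast and load model are invented for illustration.

    solar_forecast_kw = [0, 0, 0, 0, 0, 1, 3, 6, 9, 11, 12, 12, 11, 9, 6, 3, 1, 0, 0, 0, 0, 0, 0, 0]
    baseline_load_kw = 5

    def schedule_hour(submit_hour: int, max_defer_hours: int = 12) -> int:
        window = [h % 24 for h in range(submit_hour, submit_hour + max_defer_hours + 1)]
        surplus = {h: solar_forecast_kw[h] - baseline_load_kw for h in window}
        best = max(window, key=lambda h: surplus[h])
        # Defer only when some hour in the window actually has spare solar power.
        return best if surplus[best] > 0 else submit_hour

    print(schedule_hour(2))    # deferred into the midday solar peak
    print(schedule_hour(18))   # no surplus within 12 h -> run immediately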
Title: See it to believe it?: the role of visualisation in systems research Abstract: ABSTRACTA common fixture of computer systems research is the visualisation-in-the-loop tool: a tool that produces complex output data and requires a human user to interpret the data visually. However, systems research frequently omits or sidelines details of the visualisation components that were necessary for the tool. In a survey of 1,274 recent systems papers we find that at least 7.7% (98) of them present a visualisation-in-the-loop tool. We also find that the majority of these publications pay no attention to, and give little explanation of, the implemented visualisations. We argue that the impact and reach of visualisation-in-the-loop systems research can be greatly enhanced when proper exposition is given to visualisation, and we propose a concrete checklist of steps for authors to realise this opportunity.
710,819
Title: Accelerating large-scale distributed neural network training with SPMD parallelism Abstract: ABSTRACTDeep neural networks (DNNs) with trillions of parameters have emerged, e.g., Mixture-of-Experts (MoE) models. Training models of this scale requires sophisticated parallelization strategies such as the newly proposed SPMD parallelism, which shards each tensor along different dimensions. A common problem when using SPMD is that computation stalls during communication due to data dependencies, resulting in low GPU utilization and long training time. We present a general technique to accelerate SPMD-based DNN training by maximizing computation-communication overlap and automatic SPMD strategy search. The key idea is to duplicate the DNN model into two copies that have no dependency, and interleave their execution such that computation of one copy overlaps with communication of the other. We propose a dynamic programming algorithm to automatically identify optimized sharding strategies that minimize model training time by maximally enabling computation-communication overlap. Experiments show that our designs achieve up to 61% training speed-up as compared to existing frameworks.
710,820
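A schematic model of the interleaving idea from the abstract above: with two dependency-free model copies in flight, the GPU computes a layer of one copy while the other copy's collective communication is in progress. Stage costs are made-up units, not measurements from the paper.

    COMPUTE, COMM, LAYERS = 4, 3, 3

    def serial_time_two_copies() -> int:
        # Run the copies one after the other with no overlap: every layer's
        # compute waits for the previous layer's collective to finish.
        return 2 * LAYERS * (COMPUTE + COMM)

    def interleaved_time_two_copies() -> int:
        gpu_free = 0                       # single compute resource
        ready = {"A": 0, "B": 0}           # when each copy's next layer may start
        finish = {"A": 0, "B": 0}
        for _ in range(LAYERS):
            for copy in ("A", "B"):
                start = max(gpu_free, ready[copy])   # wait for GPU and own collective
                gpu_free = start + COMPUTE
                finish[copy] = gpu_free + COMM       # collective overlaps the other copy
                ready[copy] = finish[copy]
        return max(finish.values())

    print("no overlap:  ", serial_time_two_copies())        # 42 units
    print("interleaved: ", interleaved_time_two_copies())   # 27 units: comm mostly hidden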
Title: PGPregel: an end-to-end system for privacy-preserving graph processing in geo-distributed data centers Abstract: ABSTRACTGraph processing is a popular computing model for big data analytics. Emerging big data applications are often maintained in multiple geographically distributed (geo-distributed) data centers (DCs) to provide low-latency services to global users. Graph processing in geo-distributed DCs suffers from costly inter-DC data communications. Furthermore, due to increasing privacy concerns, geo-distribution imposes diverse, strict, and often asymmetric privacy regulations that constrain geo-distributed graph processing. Existing graph processing systems fail to address these two challenges. In this paper, we design and implement PGPregel, which is an end-to-end system that provides privacy-preserving graph processing in geo-distributed DCs with low latency and high utility. To ensure privacy, PGPregel smartly integrates Differential Privacy into graph processing systems with the help of two core techniques, namely sampling and combiners, to reduce the amount of inter-DC data transfer while preserving good accuracy of graph processing results. We implement our design in Giraph and evaluate it in real cloud DCs. Results show that PGPregel can preserve the privacy of graph data with low overhead and good accuracy.
710,821
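A hedged sketch of the two ingredients the PGPregel abstract names, sampling and combiners, applied to cross-DC messages: messages bound for remote vertices are sampled, combined into one sum per destination, and perturbed with Laplace noise before leaving the local data center. Parameters and scaling are illustrative, not PGPregel's calibrated mechanism.

    import random, math

    def laplace_noise(scale: float) -> float:
        # Inverse-CDF sampling of a Laplace(0, scale) random variable.
        u = random.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

    def combine_for_remote_dc(messages, sample_rate=0.5, sensitivity=1.0, epsilon=1.0):
        """messages: list of (dst_vertex, value) bound for another data center."""
        combined = {}
        for dst, value in messages:
            if random.random() < sample_rate:                   # sampling cuts inter-DC volume
                combined[dst] = combined.get(dst, 0.0) + value  # combiner: one sum per vertex
        scale = sensitivity / epsilon
        # Rescale for the dropped samples and add noise before the data leaves the DC.
        return {dst: total / sample_rate + laplace_noise(scale)
                for dst, total in combined.items()}

    msgs = [("v7", 1.0), ("v7", 1.0), ("v9", 1.0), ("v7", 1.0)]
    print(combine_for_remote_dc(msgs))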
Title: The power of prediction: microservice auto scaling via workload learning Abstract: ABSTRACTWhen deploying microservices in production clusters, it is critical to automatically scale containers to improve cluster utilization and ensure service level agreements (SLAs). Although reactive scaling approaches work well for monolithic architectures, they are not necessarily suitable for microservice frameworks due to the long delay caused by complex microservice call chains. In contrast, existing proactive approaches leverage end-to-end performance prediction for scaling, but cannot effectively handle microservice multiplexing and dynamic microservice dependencies. In this paper, we present Madu, a proactive microservice auto-scaler that scales containers based on predictions for individual microservices. Madu learns workload uncertainty to handle the highly dynamic dependency between microservices. Additionally, Madu adopts OS-level metrics to optimize resource usage while maintaining good control over scaling overhead. Experiments on large-scale deployments of microservices in Alibaba clusters show that the overall prediction accuracy of Madu can reach as high as 92.3% on average, which is 13% higher than the state-of-the-art approaches. Furthermore, experiments running real-world microservice benchmarks in a local cluster of 20 servers show that Madu can reduce the overall resource usage by 1.7X compared to reactive solutions, while reducing end-to-end service latency by 50%.
710,822
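An illustrative sketch of prediction-driven scaling for one microservice, as motivated by the Madu abstract above: provision containers for the predicted request rate plus an uncertainty margin. The per-container capacity, margin width, and forecast numbers are assumptions, not Madu's learned model.

    import math

    def containers_for(pred_rps: float, pred_std: float,
                       per_container_rps: float = 50.0, k_sigma: float = 2.0) -> int:
        upper = pred_rps + k_sigma * pred_std   # hedge against under-prediction
        return max(1, math.ceil(upper / per_container_rps))

    # Predicted load (mean, std) for the next interval, per microservice in a call chain.
    forecast = {"frontend": (900, 80), "cart": (400, 120), "checkout": (120, 30)}
    plan = {svc: containers_for(mu, sd) for svc, (mu, sd) in forecast.items()}
    print(plan)   # {'frontend': 22, 'cart': 13, 'checkout': 4}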