Columns: text (string, lengths 70 to 7.94k); __index_level_0__ (int64, 105 to 711k)
Title: 3rd Workshop on Social Affective Multimodal Interaction for Health (SAMIH) Abstract: ABSTRACT This workshop discusses how interactive, multimodal technology such as virtual agents can be used in social skills training for measuring and training social-affective interactions. Sensing technology now enables analyzing users' behaviors and physiological signals. Various signal processing and machine learning methods can be used for such prediction tasks. Such social signal processing and tools can be applied to measure and reduce social stress in everyday situations, including public speaking at schools and workplaces.
711,123
Title: Second International Workshop on Deep Video Understanding Abstract: ABSTRACT This is the introduction paper to the 2nd International Workshop on Deep Video Understanding, organized at the 24th ACM International Conference on Multimodal Interaction. In recent years, a growing trend towards understanding videos (in particular movies) at a deeper level has motivated researchers working in multimedia and computer vision to present new approaches and datasets to tackle this problem. This is a challenging research area which aims to develop a deep understanding of the relations which exist between different individuals and entities in movies using all available modalities such as video, audio, text and metadata. The aim of this workshop is to foster innovative research in this new direction and to provide benchmarking evaluations to advance technologies in the deep video understanding community.
711,124
Title: Multimodal Affect and Aesthetic Experience Abstract: ABSTRACT The term “aesthetic experience” corresponds to the inner state of a person exposed to the form and content of artistic objects. Quantifying and interpreting the aesthetic experience of people in different contexts can contribute towards (a) creating context and (b) better understanding people’s affective reactions to different aesthetic stimuli. Focusing on different types of artistic content, such as movies, music, literature, urban art, ancient artwork, and modern interactive technology, the goal of this workshop is to enhance the interdisciplinary collaboration among researchers coming from the following domains: affective computing, aesthetics, human-robot/computer interaction, digital archaeology and art, culture, addictive games.
711,125
Title: lifeXplore at the Lifelog Search Challenge 2022 Abstract: ABSTRACT Lifelogging creates substantial data archives that are challenging to manage and search. The annual Lifelog Search Challenge (LSC) aims at improving this situation by encouraging international teams to create interactive retrieval systems for searching large lifelog databases. The LSC challenge is held as a live event co-located with the ACM International Conference on Multimedia Retrieval (ICMR), where participating teams compete against each other by solving time-based retrieval tasks. In this paper, we present an improved version of lifeXplore -- our system, which has participated since LSC2018. For LSC2022, we focus on improving the result presentation as well as the system's interface.
711,126
Title: Voxento 3.0: A Prototype Voice-Controlled Interactive Search Engine for Lifelog Abstract: ABSTRACT Voxento is an interactive voice-based retrieval system for lifelogs which has been redeveloped and optimised to participate in the fifth Lifelog Search Challenge, LSC'22, at ACM ICMR'22. Building on our previous experience in the LSC competition, where Voxento ranked in the top 4 among 17 participants at LSC'21, we present a revised version that addresses the critical points for improving the efficiency of retrieval tasks over lifelog datasets. Voxento provides a spoken interface to lifelog data, which enables both expert and novice users to interact with a personal lifelog using a range of vocal commands and interactions. We made several important improvements to support both content retrieval and system interaction: this latest version adds a text-based search feature, new filters based on new metadata provided in the lifelog data, richer visual information and features, and enhanced speech queries. The data preparation tasks also comprise a new function that reduces the number of non-relevant images, and the latest CLIP model version is used to derive features from images. The long-term development of Voxento targets lifelog retrieval that supports speech and conversational interaction with fewer physical actions required of users, such as using a mouse. The system presented here uses a desktop computer in order to participate in the LSC'22 competition, with the option to use voice interaction or standard text-based retrieval.
711,127
Title: Multimodal Interactive Lifelog Retrieval with vitrivr-VR Abstract: ABSTRACT The multimodal nature of lifelog data poses unique challenges for analysis, indexing and interactive retrieval. To address these challenges, the Lifelog Search Challenge (LSC) is an annual evaluation campaign allowing interactive retrieval systems to explore new ideas and measure their performance against each other. This paper describes the virtual reality (VR) multimedia retrieval system vitrivr-VR, with a focus on aspects relevant to the LSC'22, especially the user interaction in VR, the formulation of typical LSC queries, and different options to explore the retrieval results in VR.
711,128
Title: E-Myscéal: Embedding-based Interactive Lifelog Retrieval System for LSC'22 Abstract: ABSTRACT Developing interactive lifelog retrieval systems is a growing research area. There are many international competitions for lifelog retrieval that encourage researchers to build effective systems that can address the multimodal retrieval challenge of lifelogs. The Lifelog Search Challenge (LSC) was first organised in 2018 and is currently the only interactive benchmarking evaluation for lifelog retrieval systems. Participating systems should have an accurate search engine and a user-friendly interface that can help users to retrieve relevant content. In this paper, we upgrade our previous MyScéal, the top-performing system in LSC'20 and LSC'21, and present E-MyScéal for LSC'22, which includes a completely different search engine. Instead of using visual concepts for retrieval as MyScéal does, the new E-MyScéal employs an embedding technique that better serves novice users who are not familiar with the concepts. Our experiments show that the new search engine can rank a relevant image first for a quarter of the LSC'21 queries (26%) using just the first hint from the textual information need. Regarding the user interface, we keep the simple non-faceted design of the previous version but improve the event view browsing in order to better support novice users.
711,129
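The embedding-based retrieval described above can be illustrated with a minimal sketch: text and image embeddings from a joint text-image model (e.g. CLIP) are compared by cosine similarity, and images are ranked by score. The arrays, shapes, and function names here are hypothetical; the abstract does not specify E-MyScéal's actual implementation.

```python
# Minimal sketch of embedding-based lifelog retrieval, assuming embeddings
# were precomputed by a joint text-image model; shapes are illustrative.
import numpy as np

def rank_images(query_embedding: np.ndarray, image_embeddings: np.ndarray):
    """Return image indices sorted by cosine similarity to the query."""
    q = query_embedding / np.linalg.norm(query_embedding)
    imgs = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    scores = imgs @ q                      # cosine similarity per image
    return np.argsort(-scores), scores     # best match first

# Toy usage: 1000 images in a 512-dimensional embedding space.
rng = np.random.default_rng(0)
images = rng.normal(size=(1000, 512))
query = rng.normal(size=512)
order, scores = rank_images(query, images)
print("top-5 image ids:", order[:5])
```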
Title: vitrivr at the Lifelog Search Challenge 2022 Abstract: ABSTRACT In this paper, we present the iteration of the multimedia retrieval system vitrivr participating in LSC 2022. vitrivr is a general-purpose retrieval system which has previously participated in the LSC. We describe the system architecture and functionality, and show initial results based on the test and validation topics.
711,130
Title: Flexible Interactive Retrieval SysTem 3.0 for Visual Lifelog Exploration at LSC 2022 Abstract: ABSTRACT Building a retrieval system with lifelogging data is more complicated than with ordinary data due to the redundancies, blurriness, massive amount of data, various sources of information accompanying lifelogging data, and especially the ad-hoc nature of queries. The Lifelog Search Challenge (LSC) is a benchmarking challenge that encourages researchers and developers to push the boundaries in lifelog retrieval. For LSC'22, we develop FIRST 3.0, a novel and flexible system that leverages expressive cross-domain embeddings to enhance the searching process. Our system aims to adaptively capture the semantics of an image at different levels of detail. We also propose to augment our system with an external search engine to help our system with initial visual examples for unfamiliar concepts. Finally, we organize image data in hierarchical clusters based on their visual similarity and location to assist users in data exploration. Experiments show that our system is both fast and effective in handling various retrieval scenarios.
711,131
Title: LifeSeeker 4.0: An Interactive Lifelog Search Engine for LSC'22 Abstract: ABSTRACT In this paper, we introduce LifeSeeker 4.0 - an interactive lifelog retrieval system developed for the fifth annual Lifelog Search Challenge (LSC'22). In LifeSeeker 4.0, we focus on enhancing our previous system so that users who have little to no knowledge of the underlying system functioning and lifelog data can use it with ease, by employing a Contrastive Language-Image Pre-training (CLIP) model. Furthermore, we have exploited the music metadata to facilitate searches that may incorporate emotion. Event clustering is also improved in this version to improve the user experience by reducing the occurrence of repeated images and hence decreasing the search time.
711,132
Title: MEMORIA: A Memory Enhancement and MOment RetrIeval Application for LSC 2022 Abstract: ABSTRACT Retrieving data from and analyzing lifelogs has proven to be a very complex task, and its interdisciplinary challenges have attracted increasing attention from the scientific communities in information retrieval and lifelogging. The Lifelog Search Challenge is an international competition for lifelog retrieval in which researchers propose their approaches and compete to solve lifelog retrieval challenges and evaluate the effectiveness of their systems. In this paper, we present MEMORIA, a computational tool participating for the first time in the Lifelog Search Challenge 2022. Information retrieval is based on searching keywords and time periods, and several computer vision methods are used to process visual lifelogs, from pre-processing algorithms to feature extraction methods, in order to enrich the annotation of the lifelogs. Preliminary experimental results of user interaction with our retrieval module are presented, confirming the effectiveness of the proposed approach and showing the most relevant functionalities of the system.
711,133
Title: Memento 2.0: An Improved Lifelog Search Engine for LSC'22 Abstract: ABSTRACT In this paper, we present Memento 2.0, an improved version of our system which first participated in the Lifelog Search Challenge 2021. Memento 2.0 employs image-text embeddings derived from two CLIP models (ViT-L/14 and ResNet-50x64) and adopts a weighted ensemble approach to derive a combined final ranking. Our approach significantly improves performance over the baseline LSC'21 system. We additionally make important updates to the system's user interface after analysing its shortcomings, making it more efficient and better suited to the needs of the Lifelog Search Challenge.
711,134
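A minimal sketch of the weighted-ensemble idea above: normalise each model's similarity scores and combine them with fixed weights before ranking. The weights and the min-max normalisation are assumptions for illustration, not Memento 2.0's actual values.

```python
# Hedged sketch of weighted-ensemble rank fusion over two score lists,
# e.g. from two different CLIP models; all values here are illustrative.
import numpy as np

def ensemble_rank(scores_a, scores_b, w_a=0.6, w_b=0.4):
    """Min-max normalise each model's scores, then combine with weights."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)
    combined = w_a * norm(scores_a) + w_b * norm(scores_b)
    return np.argsort(-combined), combined

rng = np.random.default_rng(1)
vit_scores = rng.random(100)   # e.g. ViT-L/14 similarities (toy values)
rn_scores = rng.random(100)    # e.g. ResNet-50x64 similarities (toy values)
order, _ = ensemble_rank(vit_scores, rn_scores)
print("top result:", order[0])
```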
Title: An Introduction to Retrieval and Reminiscence from Lifelog Archives at NTCIR Abstract: ABSTRACT In recent years, various software and hardware tools have entered the consumer market which enable users to log data about their lives on a continuous basis. Popular examples include self-tracking devices or apps such as Fitbit or Garmin that allow users to keep track of their physical activities or to monitor their biometrics. The process of gathering such multi-modal data from multiple sources is also referred to as lifelogging. Due to the constant stream of data being captured, lifelogging can result in the creation of large personal archives that are too large for manual organization. Consequently, automated approaches to handle such data are needed. However, due to privacy concerns, advances in the field have been limited by the lack of shared test collections. Aiming to promote further research on novel approaches to multi-modal personal data analytics and retrieval, we organized a comparative benchmarking exercise, Lifelog, that ran between 2015 and 2022 as part of the evaluation conference NTCIR. Several Lifelog datasets were released and participants could work on various sub-tasks to tackle different challenges related to Lifelog retrieval. In this keynote presentation, I will give an overview of these sub-tasks and reflect on lessons learned.
711,135
Title: Project Primrose: Reflective Light-Diffuser Modules for Non-Emissive Flexible Display Systems Abstract: ABSTRACT Recent advances in smart materials have enabled displays to move beyond planar surfaces into the fabric of everyday life. We propose reflective light-diffuser modules for non-emissive flexible display systems. Our system leverages reflective-backed polymer-dispersed liquid crystal (PDLC), an electroactive material commonly used in smart window applications. This low-power non-emissive material can be cut to any shape, and dynamically diffuses light. We present the design & fabrication of two exemplar artifacts, a canvas and a handbag, that use the reflective light-diffuser modules. We also describe our content authoring pipeline and interaction modalities. We hope this work inspires future designers of flexible displays.
711,136
Title: Flexel: A Modular Floor Interface for Room-Scale Tactile Sensing Abstract: ABSTRACT Human environments are physically supported by floors, which hold people and furniture against gravitational pull. Since our body motions continuously generate vibrations and loads that propagate into the ground, measuring these expressive signals enables unobtrusive activity sensing. In this study, we present Flexel, a modular floor interface for room-scale tactile sensing. By paving a room with floor interfaces, our system can immediately begin to infer touch locations, track user locations, recognize foot gestures, and detect object locations. Through a series of exploratory studies, we determined the preferable hardware design that adheres to construction conventions, as well as the optimal sensor density that mediates the trade-off between cost and performance. We summarize our findings into design guidelines that are generalizable to other floor interfaces. Finally, we provide example applications for room-scale tactile sensing enabled by our Flexel system.
711,137
Title: ARDW: An Augmented Reality Workbench for Printed Circuit Board Debugging Abstract: ABSTRACT Debugging printed circuit boards (PCBs) can be a time-consuming process, requiring frequent context switching between PCB design files (schematic and layout) and the physical PCB. To assist electrical engineers in debugging PCBs, we present ARDW, an augmented reality workbench consisting of a monitor interface featuring PCB design files, a projector-augmented workspace for PCBs, tracked test probes for selection and measurement, and a connected test instrument. The system supports common debugging workflows for augmented visualization on the physical PCB as well as augmented interaction with the tracked probes. We quantitatively and qualitatively evaluate the system with 10 electrical engineers from industry and academia, finding that ARDW speeds up board navigation and provides engineers with greater confidence in debugging. We discuss practical design considerations and paths for improvement to future systems. A video demo of the system may be accessed here: https://youtu.be/RbENbf5WIfc.
711,138
Title: Phrase-Gesture Typing on Smartphones Abstract: ABSTRACT We study phrase-gesture typing, a gesture typing method that allows users to type short phrases by swiping through all the letters of the words in a phrase using a single, continuous gesture. Unlike word-gesture typing, where text needs to be entered word by word, phrase-gesture typing enters text phrase by phrase. To demonstrate the usability of phrase-gesture typing, we implemented a prototype called PhraseSwipe. Our system is composed of a frontend interface designed specifically for typing through phrases and a backend phrase-level gesture decoder developed based on a transformer-based neural language model. Our decoder was trained using five million phrases of varying lengths of up to five words, chosen randomly from the Yelp Review Dataset. Through a user study with 12 participants, we demonstrate that participants could type using PhraseSwipe at an average speed of 34.5 WPM with a Word Error Rate of 1.1%.
711,139
Title: Notational Programming for Notebook Environments: A Case Study with Quantum Circuits Abstract: ABSTRACT We articulate a vision for computer programming that includes pen-based computing, a paradigm we term notational programming. Notational programming blurs contexts: certain typewritten variables can be referenced in handwritten notation and vice-versa. To illustrate this paradigm, we developed an extension, Notate, to computational notebooks which allows users to open drawing canvases within lines of code. As a case study, we explore quantum programming and designed a notation, Qaw, that extends quantum circuit notation with abstraction features, such as variable-sized wire bundles and recursion. Results from a usability study with novices suggest that users find our core interaction of implicit cross-context references intuitive, but point to further improvements in debugging infrastructure, interface design, and recognition rates. Throughout, we discuss questions raised by the notational paradigm, including a shift from ‘recognition’ of notations to ‘reconfiguration’ of practices and values around programming, and from ‘sketching’ to writing and drawing, or what we call ‘notating.’
711,140
Title: Using Annotations for Sensemaking About Code Abstract: ABSTRACT Developers spend significant amounts of time finding, relating, navigating, and, more broadly, making sense of code. While sensemaking, developers must keep track of many pieces of information including the objectives of their task, the code locations of interest, their questions and hypotheses about the behavior of the code, and more. Despite this process being such an integral aspect of software development, there is little tooling support for externalizing and keeping track of developers’ information, which led us to develop Catseye – an annotation tool for lightweight notetaking about code. Catseye has advantages over traditional methods of externalizing code-related information, such as commenting, in that the annotations retain the original context of the code while not actually modifying the underlying source code, they can support richer interactions such as lightweight versioning, and they can be used as navigational aids. In our investigation of developers’ notetaking processes using Catseye, we found developers were able to successfully use annotations to support their code sensemaking when completing a debugging task.
711,141
Title: HapTag: A Compact Actuator for Rendering Push-Button Tactility on Soft Surfaces Abstract: ABSTRACT As touch interactions become ubiquitous in human-computer interaction, it is critical to enrich haptic feedback to improve efficiency, accuracy, and immersive experiences. This paper presents HapTag, a thin and flexible actuator that supports the integration of push-button tactile renderings into everyday soft surfaces. Specifically, HapTag works on the principle of the hydraulically amplified self-healing electrostatic (HASEL) actuator, optimized by embedding a pressure sensing layer and activated with a dedicated voltage appliance in response to users' input actions, resulting in fast response times and controllable, expressive push-button tactile rendering capabilities. HapTag comes in a compact form factor and can be attached, integrated, or embedded on various soft surfaces like cloth, leather, and rubber. Three common push-button tactile patterns were adopted and implemented with HapTag. We validated the feasibility and expressiveness of HapTag by demonstrating a series of innovative applications under different circumstances.
711,142
Title: Variability of BLE Advertisement Packets Received Signal Strength and Delivery Probability in the Presence of Interferences Abstract: ABSTRACT The Bluetooth Low Energy (BLE) advertisement messages allow estimating the location of and distance between mobile devices. We analyse how the signal level varies depending on interference, distance, and the orientation of the transmitting and receiving devices. To provide a representative BLE Received Signal Strength Indicator (RSSI) dataset, we present a series of experimental measurements, including different receiver antenna orientations at several distances. We analyse the signal level variability as a function of distance and angle in an indoor environment. Our measurements show that the mean RSSI value equals -65.5 dBm, -78.4 dBm, -84.1 dBm and -90.8 dBm for 1 m, 5 m, 10 m, and 16 m, respectively. The standard deviation changes from 5.7 dBm through 5.9 dBm and 4.5 dBm to 3.5 dBm for 1 m, 5 m, 10 m, and 16 m, respectively. We also investigate the influence of interference by emulating an environment in which devices actively transmit BLE signals with varying intensity: on average, every 0 ms (no additional signals), 4 ms, and 8 ms. The experiments show that from 13% to 40% fewer packets are delivered per second in the presence of interference. Nevertheless, the RSSI characteristics remain very similar regardless of the presence of emulated traffic.
711,143
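The reported mean RSSI values lend themselves to a standard log-distance path-loss fit; the sketch below estimates the path-loss exponent from the four data points quoted in the abstract. The model choice is a common assumption, not something the paper states it used.

```python
# Fit a log-distance path-loss model to the mean RSSI values above:
# RSSI(d) ~ A + slope * log10(d), with path-loss exponent n = -slope / 10.
import numpy as np

d = np.array([1.0, 5.0, 10.0, 16.0])             # metres, from the abstract
rssi = np.array([-65.5, -78.4, -84.1, -90.8])    # dBm, from the abstract

slope, intercept = np.polyfit(np.log10(d), rssi, 1)
n = -slope / 10.0                                 # path-loss exponent
print(f"RSSI at 1 m ~ {intercept:.1f} dBm, path-loss exponent n ~ {n:.2f}")

def predict_rssi(distance_m):
    """Predicted mean RSSI at an arbitrary distance under the fitted model."""
    return intercept + slope * np.log10(distance_m)

print(f"predicted RSSI at 8 m: {predict_rssi(8):.1f} dBm")
```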
Title: Empirical evaluation of short-range wifi vessel-to-shore overwater communications Abstract: ABSTRACT Unmanned vehicles used in ocean science, defense operations and commercial activities collect large amounts of data that is further processed onshore. For real-time information exchange, the wireless link between the unmanned vehicle and onshore devices must be reliable. In this work, we empirically evaluate a WiFi link between an autonomous underwater vehicle (AUV) on the surface and an onshore device under real-world conditions. This work allowed i) collecting a large-scale realistic dataset and ii) identifying major factors impairing communication in such scenarios. The TX-RX antenna alignment, the operation mode (manual vs automatic) and the varying reflecting surface induced by AUV mobility lead to sudden changes (e.g. nulls) in the received signal strength that can be larger than 20 dB. This study provides useful insights into the design of robust vessel-to-shore short-range communications.
711,144
Title: Coverage and performance analysis of 5G non-standalone deployments Abstract: ABSTRACT Fifth Generation (5G) networks are becoming the norm in the global telecommunications industry. In this paper, we present the first large-scale measurement study on commercial 5G Non Standalone (NSA) deployments in a European country. We leverage the collected dataset, which covers two Mobile Network Operators (MNOs) in Rome, Italy, to first, study network deployment and radio coverage aspects, and second, explore the performance of two use cases related to enhanced Mobile Broadband (eMBB) and Ultra-Reliable Low Latency Communication (URLLC). Results show that 5G NSA can provide higher downlink throughput and slightly lower latency compared to Fourth Generation (4G) systems. However, performance is influenced by several factors, including propagation conditions, system configurations, and handover events, ultimately highlighting the need for further system optimization.
711,145
Title: Reinforcement learning of millimeter wave beamforming tracking over COSMOS platform Abstract: ABSTRACT Communication over large-bandwidth millimeter wave (mmWave) spectrum bands can provide high data rates by utilizing high-gain beamforming vectors (briefly, beams). Real-time tracking of such beams, which is needed for supporting mobile users, can be accomplished by developing machine learning (ML) models. While computer simulations have shown the success of such ML models, experimental results are still limited. Consequently, in this paper, we verify the effectiveness of mmWave beam tracking over the open-source COSMOS testbed. We particularly utilize a multi-armed bandit (MAB) scheme, which follows a reinforcement learning (RL) approach. In our MAB-based beam tracking model, beam selection is modeled as an action, while the reward of the algorithm is modeled through the link throughput. Experimental results, conducted over the 60-GHz COSMOS-based mobile platform, show that the MAB-based beam tracking learning model can achieve almost 92% of the throughput of the Genie-aided beams after a few learning samples.
711,146
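A toy bandit conveys the beam-tracking formulation above: beams are arms and measured link throughput is the reward. The abstract does not name the exact MAB policy, so the epsilon-greedy variant below is an assumption, as are all parameter values.

```python
# Epsilon-greedy multi-armed bandit sketch of beam tracking: each arm is a
# beam, and the running mean of observed throughput drives beam selection.
import random

class BeamBandit:
    def __init__(self, num_beams, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * num_beams
        self.means = [0.0] * num_beams    # running mean throughput per beam

    def select_beam(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.means))   # explore
        return max(range(len(self.means)), key=lambda b: self.means[b])

    def update(self, beam, throughput):
        self.counts[beam] += 1
        self.means[beam] += (throughput - self.means[beam]) / self.counts[beam]

# Toy environment: beam 3 is best; observed throughput is noisy.
random.seed(0)
truth = [10, 40, 70, 92, 55]
bandit = BeamBandit(num_beams=5)
for _ in range(500):
    b = bandit.select_beam()
    bandit.update(b, truth[b] + random.gauss(0, 5))
print("learned best beam:", bandit.means.index(max(bandit.means)))
```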
Title: Wireless guard for trustworthy spectrum management Abstract: ABSTRACT ARA is a first-of-its-kind wireless living lab for advanced wireless in rural regions. In ARA, users can reserve programmable wireless resources, for instance, Software Defined Radios (SDRs), and wireless spectrum to perform a wide range of experiments. Given ARA-enabled open access to programmable wireless resources, it is important to enforce proper usage of the available spectrum, thereby ensuring no user (benign or malicious) creates any harmful interference to other experimenters or any incumbent. Therefore, we develop Wireless Guard (WG), a mechanism for wireless spectrum usage monitoring. For WG, we use two approaches for enforcing the spectrum policy: (i) a reactive approach and (ii) a proactive approach. In this paper, we present the hardware and software architectures of the ARA WG along with the end-to-end pipeline for managing experiments in case of deviation from the spectrum usage policy. Initial evaluations show the effectiveness of WG in enforcing spectrum usage policies in ARA.
711,147
Title: SDR-LoRa: dissecting and implementing LoRa on software-defined radios to advance experimental IoT research Abstract: ABSTRACT In this paper, we present SDR-LoRa, a full-fledged SDR implementation of a LoRa transmitter and receiver. First, we reverse-engineer the LoRa physical layer (PHY) functionalities, including the procedures of packet modulation, demodulation, and preamble detection. Based on this analysis, we develop the first Software Defined Radio (SDR) implementation of the LoRa PHY. Furthermore, we integrate LoRa with an Automatic Repeat Request (ARQ) error detection protocol. SDR-LoRa has been validated on (i) the Colosseum wireless channel emulator; and (ii) a real testbed with USRP radios and commercial-off-the-shelf (COTS) devices. Our experimental results demonstrate that the performance of SDR-LoRa is in line with commercial LoRa systems. We pledge to share the entirety of the SDR-LoRa code.
711,148
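The core of the LoRa PHY that SDR-LoRa reverse-engineers is chirp spread spectrum modulation. The sketch below generates a baseband upchirp for one symbol and demodulates it with the standard dechirp-and-FFT trick; parameters are textbook LoRa values (spreading factor SF, sample rate equal to bandwidth), and the actual SDR-LoRa code may differ.

```python
# Sketch of LoRa chirp spread spectrum modulation/demodulation at baseband.
import numpy as np

def lora_symbol(symbol: int, sf: int = 7):
    """Complex baseband upchirp for one LoRa symbol (fs = BW assumed)."""
    n_samples = 2 ** sf                    # samples per symbol at fs = BW
    n = np.arange(n_samples)
    # Instantaneous frequency sweeps linearly and wraps around; the symbol
    # value sets the starting frequency offset.
    inst_freq = ((symbol + n) % n_samples) / n_samples - 0.5  # cycles/sample
    phase = 2 * np.pi * np.cumsum(inst_freq)
    return np.exp(1j * phase)

def demod(rx, sf: int = 7):
    """Multiply by a downchirp, then FFT; the peak bin is the symbol value."""
    downchirp = np.conj(lora_symbol(0, sf))
    return int(np.argmax(np.abs(np.fft.fft(rx * downchirp))))

tx = lora_symbol(42, sf=7)
print("decoded symbol:", demod(tx, sf=7))   # expect 42
```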
Title: Experimental evaluation of virtual reality applications running on next-gen network scenarios with edge cloud assistance Abstract: ABSTRACT Volumetric video is an emerging key technology for immersive representation of 3D spaces and objects. Rendering volumetric video at the client's end requires significant computational power, which is challenging especially for mobile devices. One way to mitigate this is to offload the rendering to the edge cloud and stream the video and audio to a thin client. Remote edge-cloud rendering may, however, increase the end-to-end delay of the system due to the added network and processing latency compared to a local rendering system. We investigate network latency in edge-based remote rendering over NextG networks and identify the bottleneck deteriorating the application performance. Further, we delve into the current state of the art and challenges of performing rendering remotely at the edge cloud and study the associated problems that need to be addressed in order to realize remote augmented reality (AR) / virtual reality (VR). Our prototype implementation shows the effectiveness of maintaining the application QoE by prioritizing data at the level of a sub-flow and reducing the motion-to-photon latency.
711,149
Title: Challenges in platform-independent UWB ranging and localization systems Abstract: ABSTRACT The Ultra-Wideband (UWB) technology has grown in popularity to the point where there are numerous UWB transceivers on the market that use different center frequencies, bandwidths, or hardware architectures. At the same time, efforts are made to reduce the ranging and localization errors of UWB systems. Until now, not much attention has been dedicated to the cross-platform compatibility of these methods. In this paper, we discuss for the first time the challenges in obtaining platform-independent UWB ranging and localization systems. We derive our observations from a measurement campaign conducted with UWB devices from three different developers. We evaluate the differences in the ranging errors and channel impulse responses of the devices and show how they can affect ranging mitigation methods customized for one device only. Finally, we discuss possible solutions towards platform-independent UWB localization systems.
711,150
Title: Is wifi 802.11mc fine time measurement ready for prime-time localization? Abstract: ABSTRACT WiFi's fine time measurement (FTM) based ranging protocol has set the stage for mass adoption of location-aware applications and services in WiFi-pervading enterprise and consumer ecosystems. However, the lack of deployment of such commercial-scale localization solutions has motivated us to conduct a comprehensive experimental study that aims to verify whether WiFi's FTM is indeed ready for prime-time localization. With heterogeneity in operation (devices, environments, and spectrum) being the fundamental essence of commercial deployments, our study focuses on FTM's ability to deliver usable localization under such practical conditions. Being a first of its kind, our study reveals several interesting insights for practical operation of FTM, the most critical of them being its inability to eliminate substantial offsets in estimated ranges between heterogeneous devices and configurations, which degrade performance significantly (up to 20 m error). Albeit a negative result for FTM's readiness, we also propose a simple but promising remedy: an over-the-top auto-calibration solution that allows every WiFi device, when it enters an enterprise environment, to self-calibrate its offsets on demand, thereby salvaging FTM to render it useful (median error of 2 m) for localization.
711,151
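The proposed auto-calibration can be pictured as a simple offset estimate: range to an anchor at a known distance, take the median error as the device's offset, and subtract it from later measurements. The numbers and the median-based estimator below are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of per-device FTM range offset auto-calibration.
import numpy as np

def estimate_offset(measured_ranges, true_ranges):
    """Median offset is robust to occasional outlier measurements."""
    return float(np.median(np.asarray(measured_ranges) - np.asarray(true_ranges)))

def calibrate(measured_range, offset):
    return measured_range - offset

# On entering the environment, the device ranges to an anchor at a known distance.
measured = [12.1, 11.8, 12.5, 12.0]   # metres, as reported by FTM (toy values)
truth = [4.0, 4.0, 4.0, 4.0]          # known anchor distance
offset = estimate_offset(measured, truth)
print(f"offset ~ {offset:.1f} m, calibrated: {calibrate(12.2, offset):.1f} m")
```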
Title: Handover Predictions as an Enabler for Anticipatory Service Adaptations in Next-Generation Cellular Networks Abstract: ABSTRACT Next-generation networks are envisioned to be empowered by artificial intelligence with predictive capabilities. Predicting handovers in high mobility scenarios enables networks and applications to adapt ahead of time to improve the Quality of Service (QoS). In this paper, we present a two-step machine learning (ML) method, consisting of a classifier and regressor, that can predict the remaining time until a handover occurs. Our approach is validated on a dataset that was captured in a real cellular network. The results show that upcoming handovers can be detected with a recall above 90% and the timing of handovers with an error smaller than one second. Furthermore, we compare the importance of input features derived from radio conditions and user locations for the ML models and discuss deployment scenarios of our approach. In particular, our results suggest that cell-based models perform better than models trained for larger areas.
711,152
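A minimal sketch of the two-step scheme above: a classifier decides whether a handover is imminent, and a regressor, applied only to positive cases, estimates the remaining time. The features, labels, and model choices below are synthetic stand-ins for the radio and location features mentioned in the abstract.

```python
# Two-step handover prediction sketch: classify "handover soon?", then
# regress the remaining time for the positive samples only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))                  # e.g. RSRP/RSRQ/location features
imminent = (X[:, 0] + X[:, 1] > 0.5)            # synthetic "handover soon" label
time_left = np.where(imminent, np.abs(X[:, 2]) * 2.0, np.nan)  # seconds

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, imminent)
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(
    X[imminent], time_left[imminent])

x_new = rng.normal(size=(1, 6))
if clf.predict(x_new)[0]:
    print(f"handover in ~{reg.predict(x_new)[0]:.1f} s")
else:
    print("no handover expected within the horizon")
```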
Title: Interacting with interactive fault localization tools Abstract: ABSTRACT Spectrum-Based Fault Localization (SBFL) is one of the most popular genres of Fault Localization (FL) methods among researchers. One way to increase the practical usefulness of related tools is to introduce interactivity between the user and the core FL algorithm. In this setting, the developer provides feedback to the fault localization algorithm while iterating through the elements suggested by the algorithm. This way, the proposed elements can be influenced in the hope of reaching the faulty element earlier (we call the proposed approach Interactive Fault Localization, or iFL). With this work, we present our recent achievements on this topic. In particular, we overview the basic approach and the supporting tools that we implemented for the actual usage of the method in different contexts: iFL4Eclipse for Java developers using the Eclipse IDE, and CharmFL for Python developers using the PyCharm IDE. Our aim is to provide an insight into the practicalities and effectiveness of the iFL approach, while acquiring valuable feedback. In addition, with the demonstration we would like to catalyse discussion with researchers on the topic.
711,153
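A small sketch of the ingredients above: suspiciousness scores from a standard SBFL formula (Ochiai), re-ranked after user feedback. The feedback rule here, zeroing out elements the user marks as innocent, is a deliberate simplification of iFL, not the authors' exact algorithm; the coverage numbers are invented.

```python
# Spectrum-Based Fault Localization with the Ochiai formula, plus a
# simplified iFL-style feedback step that demotes user-cleared elements.
import math

def ochiai(ef, ep, nf):
    """ef/ep: failing/passing tests covering the element; nf: failing tests missing it."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom else 0.0

# coverage[element] = (ef, ep), with total_failed failing tests overall (toy data)
coverage = {"foo.py:12": (4, 1), "foo.py:30": (4, 6), "bar.py:7": (1, 9)}
total_failed = 4

scores = {e: ochiai(ef, ep, total_failed - ef) for e, (ef, ep) in coverage.items()}

def rerank(scores, marked_innocent):
    adjusted = {e: (0.0 if e in marked_innocent else s) for e, s in scores.items()}
    return sorted(adjusted, key=adjusted.get, reverse=True)

print(rerank(scores, marked_innocent={"foo.py:12"}))
```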
Title: Iterating the minimizing Delta debugging algorithm Abstract: ABSTRACT Probably the most well-known solution to automated test case minimization is the minimizing Delta Debugging algorithm (DDMIN). It is widely used because it “just works” on any kind of input. In this paper, we focus on the fixed-point iteration of DDMIN (named DDMIN*), more specifically whether it can improve on the result of the original algorithm. We present a carefully crafted example where the output of DDMIN could be reduced further, and iterating the algorithm finds a new, smaller local optimum. Then, we evaluate the idea on a publicly available test suite. We have found that the output of DDMIN* was usually smaller than the output of DDMIN. Using characters as units of reduction, the output became smaller by 67.94% on average, and in the best case, fixed-point iteration could improve as much as 89.68% on the output size of the original algorithm.
711,154
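The fixed-point iteration is easy to state in code: run DDMIN repeatedly until the output stops shrinking. Below is a compact ddmin variant plus the DDMIN* wrapper; the interleaved subset/complement testing order and the toy failure condition are illustrative choices, not the paper's exact implementation.

```python
# Minimizing Delta Debugging (ddmin) and its fixed-point iteration DDMIN*.
# `test` returns True when the failing behaviour of interest is reproduced.
def ddmin(test, data):
    n = 2
    while len(data) >= 2:
        chunk = len(data) // n
        subsets = [data[i:i + chunk] for i in range(0, len(data), chunk)]
        reduced = False
        for i, subset in enumerate(subsets):
            complement = [x for j, s in enumerate(subsets) if j != i for x in s]
            if test(subset):                      # a single chunk reproduces
                data, n, reduced = subset, 2, True
                break
            if test(complement):                  # removing one chunk reproduces
                data, n, reduced = complement, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(data):
                break
            n = min(len(data), 2 * n)             # refine granularity
    return data

def ddmin_star(test, data):
    """Iterate ddmin until the output stops shrinking (fixed point)."""
    while True:
        smaller = ddmin(test, data)
        if len(smaller) >= len(data):
            return smaller
        data = smaller

# Toy failure: input fails whenever it contains both 3 and 7.
failing = lambda xs: 3 in xs and 7 in xs
print(ddmin_star(failing, list(range(10))))       # -> [3, 7]
```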
Title: OpenGL API call trace reduction with the minimizing Delta debugging algorithm Abstract: ABSTRACT Debugging an application that uses a graphics API and faces a rendering error is a hard task, even if we manage to record a trace of the API calls that lead to the error. Checking every call is not a feasible or scalable option, since there are potentially millions of calls in a recording. In this paper, we focus on the question of whether the number of API calls that need to be examined can be reduced by automatic techniques, and we describe how this can be achieved for the OpenGL API using the minimizing Delta Debugging algorithm. We present the results of an experiment on a real-life rendering issue, using a prototype implementation, showing a drastic reduction of the trace size (i.e. to less than 1% of the original number of calls) and positive impacts on the resource usage of the replay of the trace.
711,155
Title: KUBO: a framework for automated efficacy testing of anti-virus behavioral detection with procedure-based malware emulation Abstract: ABSTRACT Traditional testing of Anti-Virus (AV) products is usually performed on a curated set of malware samples. While this approach can evaluate an AV's overall performance on known threats, it fails to provide details on the coverage of exact attack techniques used by adversaries and malware. Such coverage information is crucial in helping users understand potential attack paths formed using new code and combinations of known attack techniques. This paper describes KUBO, a framework for systematic large-scale testing of behavioral coverage of AV software. KUBO uses a novel malware behavior emulation method to generate a large number of attacks from combinations of adversarial procedures and runs them against a set of AVs. Contrary to other emulators, our attacks are coordinated by the adversarial procedures themselves, rendering the emulated malware independent of agents and semantically coherent. We perform an evaluation of KUBO on 7 major commercial AVs utilizing tens of distinct attack procedures and thousands of their combinations. The results demonstrate that our approach is feasible, leads to automatic large-scale evaluation, and is able to unveil a multitude of open attack paths. We show how the results can be used to assess general behavioral efficacy and efficacy with respect to individual adversarial procedures.
711,156
Title: Interactive fault localization for Python with CharmFL Abstract: ABSTRACT We present a plug-in called “CharmFL” for the PyCharm IDE. It employs Spectrum-based Fault Localization to automatically analyze Python programs and produces a ranked list of potentially faulty program elements (i.e., statements, functions, etc.). Our tool offers advanced features, e.g., it enables the users to give their feedback on the suspicious elements to help re-rank them, thus improving the fault localization process. The tool utilizes contextual information about program elements complementary to the spectrum data. The users can explore function call graphs during a failed test. Thus they can investigate the data flow traces of any failed test case or construct a causal inference model for the location of the fault. The tool has been used with a set of experimental use cases.
711,157
Title: Guidelines for GUI testing maintenance: a linter for test smell detection Abstract: ABSTRACT GUI test suites suffer from high fragility; modifications or redesigns of the user interface are frequent and often invalidate the tests. This leads, for both DOM- and visual-based techniques, to a frequent need for careful maintenance of test suites, which can be expensive and time-consuming. The goal of this work is to present a set of guidelines for writing cleaner and more robust test code, reducing the cost of maintenance and producing more understandable code. Based on the provided recommendations, a static test suite analyzer and code linter has been developed. An ad-hoc grey literature study of the state of the practice was conducted by performing a semi-systematic literature review. The authors' experience was coded into a set of recommendations by applying the grounded theory methodology. Based on these results, we developed a linter in the form of a plugin for Visual Studio Code, implementing 17 of the provided guidelines. The plugin highlights test smells in the Java and JavaScript languages. Finally, we conducted a preliminary validation of the tool against test suites from real GitHub projects. The preliminary evaluation, meant as a first application of the plugin to real test suites, detected three main smells, namely the usage of global variables, the lack of adoption of the Page Object design pattern, and the usage of fragile locators such as XPath.
711,158
Title: Automation of the creation and execution of system level hardware-in-loop tests through model-based testing Abstract: ABSTRACT In this paper, we apply model-based testing (MBT) to automate the creation of hardware-in-loop (HIL) test cases. To select MBT tools, different tools' properties were compared through a literature study, resulting in the selection of the GraphWalker and MoMuT tools for use in an industrial case study. The results show that the generated test cases perform similarly to their manual counterparts in how they achieve full requirements coverage. When comparing the effort needed for applying the methods, a comparable effort is required for creating the first iteration, while with every subsequent update, MBT will require less effort than the manual process. Both methods achieve 100% requirements coverage, and since manual tests are created and executed by humans, some requirements are favoured over others due to company demands, while MBT tests are generated randomly. In addition, a comparison between the used tools showcased the differences in the models' design and their test case generation. The comparison showed that GraphWalker has a more straightforward design method and is better suited for smaller systems, while MoMuT can handle more complex systems but has a more involved design method.
711,159
Title: Adaptive visualization of tourists' preferred spots and streets using trajectory articulation Abstract: ABSTRACT Walking tourism, in which regional resources are organized around interesting themes, can provide visitors with original local walking experiences. Our project aims to collect user data through a mobile application and explore potential geographic resources, such as appealing spots and streets, for improving city-scale tourism. A density map built from GPS trajectory data is one of the easiest ways of visualizing such resources without any modeling costs. However, both user and technical factors make it difficult to interpret the heatmap in a detailed and concise way. Specifically, analysts have difficulty deciphering the areas of real interest from the heatmap, as areas associated with a high density of GPS locations may not owe that density solely to their attractiveness, e.g., rest areas. In addition, a heatmap that does not retain the topography of the streets cannot support hot-street visualization. In our research, built-in smartphone sensors are employed to distinguish multiple user contexts (e.g., stopping / walking and indoors / outdoors) during walking tours, which equalizes the inherent density biases in each GPS trajectory and adds attributes to each location point. Our analysis software accumulates the processed trajectories and generates a density map by applying different weight rules (e.g., a street-oriented rule and an indoor-oriented rule) based on semantic attributes and analytical requests. Our mobile cooperative approach adapts heatmap generation to the analyst's expectations, that is, concise hot-spot and hot-street visualization.
711,160
Title: Human mobility-based synthetic social network generation Abstract: ABSTRACT Location-Based Social Networks (LBSNs) combine location information with social networks and have been studied intensively over the last decade. The main research gap is the lack of available and authoritative social network datasets. Publicly available social network datasets are small and sparse, as only a small fraction of the population is captured in them. For this reason, network generators are often employed to generate social networks for studying LBSNs synthetically. In this work, we propose an evolving social network implemented in an agent-based simulation to generate realistic social networks. In the simulation, as agents move to different places of interest, they have the chance to make social connections with other agents visiting the same place. A large-scale real-world mobility dataset informs the choice of places that agents visit in our simulation. We show qualitatively that our simulated social networks are more realistic than those produced by traditional social network generators, including the Erdos-Renyi, Watts-Strogatz, and Barabasi-Albert models.
711,161
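The generator described above reduces to a simple loop: agents choose places from a popularity distribution, and co-visits create edges with some probability. All parameters and distributions below are invented for illustration; the paper draws place choices from a real-world mobility dataset instead.

```python
# Toy mobility-driven social network generator: co-visits create friendships.
import random
import itertools
from collections import defaultdict

random.seed(0)
NUM_AGENTS, NUM_PLACES, STEPS, P_BEFRIEND = 200, 20, 50, 0.05
place_weights = [1.0 / (i + 1) for i in range(NUM_PLACES)]  # popular places first

edges = set()
for _ in range(STEPS):
    visits = defaultdict(list)
    for agent in range(NUM_AGENTS):
        place = random.choices(range(NUM_PLACES), weights=place_weights)[0]
        visits[place].append(agent)
    for place, occupants in visits.items():
        for a, b in itertools.combinations(occupants, 2):
            if random.random() < P_BEFRIEND:
                edges.add((min(a, b), max(a, b)))

degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1
print(f"{len(edges)} edges, mean degree {2 * len(edges) / NUM_AGENTS:.1f}")
```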
Title: Spatially weighted structural similarity index: a multiscale comparison tool for diverse sources of mobility data Abstract: ABSTRACT Data collected about routine human activity and mobility is used in diverse applications to improve our society. Robust models are needed to address the challenges of our increasingly interconnected world. Methods capable of portraying the dynamic properties of complex human systems, such as simulation modeling, must comply with rigorous data requirements. Modern data sources, like SafeGraph, provide aggregate data collected from location-aware technologies. Opportunities and challenges arise to incorporate the new data into existing analysis and modeling methods. Our research employs a multiscale spatial similarity index to compare diverse origin-destination mobility datasets. Established distance ranges accommodate spatial variability in the model's datasets. This paper explores how similarity scores change with different aggregations to address discrepancies in the source data's temporal granularity. We suggest possible explanations for variations in the similarity scores and extract characteristics of human mobility for the study area. The multiscale spatial similarity index may be integrated into a vast array of analysis and modeling workflows, either during preliminary analysis or in later evaluation phases as a method of data validation (e.g., agent-based models). We propose that the demonstrated tool has the potential to enhance mobility modeling methods in the context of complex human systems.
711,162
Title: Learning fishing information from AIS data Abstract: ABSTRACT The Automatic Identification System (AIS) allows vessels to emit their position, speed and course while sailing. By international law, all large vessels (e.g., bigger than 15 m in Europe) are required to provide such data. The abundance and free availability of AIS data have created huge interest in analyzing them (e.g., to look for patterns of how ships move, detailed knowledge about sailing routes, etc.). In this paper, we use AIS data to classify areas (i.e., spatial cells) of the South Atlantic Ocean as productive or unproductive in terms of the quantity of squid that can be caught. Next, together with daily satellite data about the area, we create a training dataset from which a model is learned to predict whether an area of the ocean is productive or not. Finally, real fishing data are used to evaluate the model. As a result, for blind movements (i.e., with no information about real catches in the previous days), our model trained on data generated from AIS obtains a precision that is 18% higher than that of the model trained on actual fishing data (this is due to AIS data being larger in volume than fishing data) and 36% higher than the precision of the actual decisions of the ships studied. The results show that despite their simplicity, AIS data have potential value in building training datasets in this domain.
711,163
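A sketch of the pipeline above: label grid cells from AIS-style behaviour (slow, lingering vessels suggest fishing activity), then train a classifier on satellite-derived features to predict productivity. The thresholds, feature names, and synthetic data are illustrative assumptions, not the paper's actual labeling rules.

```python
# AIS-derived labeling plus a classifier on satellite-like cell features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic per-cell AIS statistics: mean speed (knots) and hours spent.
mean_speed = rng.uniform(0, 12, size=500)
hours = rng.uniform(0, 48, size=500)
productive = (mean_speed < 4.0) & (hours > 10.0)   # slow + lingering => fishing

# Satellite-like features per cell (e.g. sea-surface temperature, chlorophyll);
# the correlation with productivity is injected purely for the toy example.
features = np.column_stack([
    rng.normal(18, 3, 500) - 2.0 * productive,
    rng.normal(0.5, 0.2, 500) + 0.3 * productive,
])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(features[:400], productive[:400])
print("held-out accuracy:", model.score(features[400:], productive[400:]))
```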
Title: Bridging human mobility to animal activity: when humans are away, bears will play Abstract: ABSTRACT In the Anthropocene, findings on animal behavioral flexibility in response to anthropogenic changes are accumulating: human presence and activity affect the distribution, movement, activity rhythm, physiology, and diet of animal species. However, conclusions are limited by the lack of simultaneous quantitative data on both the animal and human side. Hence, the dynamic link between animal behavior and human activity and mobility is often poorly estimated. Based on long-term monitoring of a wild bear population in the Trentino region (10 bears monitored from 2006 to 2019; 20 bear-years) combined with human mobility data (Cumulative Outdoor activity Index, derived from the Strava Global Heatmap) and tourist count records, we investigated how spatial behavior and activity rhythms of bears change with variations in experienced human disturbance. We found that bears were mainly nocturnal and that, on an annual scale, nocturnality was associated with movement behavior, but both were independent of experienced human disturbance. Furthermore, nocturnality tended to increase in periods of more intense exploitation of outdoor areas by humans. Overall, these preliminary findings show that bears exhibit a notable behavioral flexibility to minimize their exposure to human presence. Through the application of different sources of human activity data, this work showcases that the integration of high resolution animal movement data with dynamic data on human mobility is crucial to meaningfully catch wildlife responses to anthropisation.
711,164
Title: Influence of air quality on human mobility: a case study using PM & CO data Abstract: ABSTRACT Understanding complex human mobility in the current digital era is very difficult, especially in highly populated metropolitan areas. The benefits of knowing urban mobility come in many ways, including, but not limited to, informing regional planners to better manage urban transportation, providing policy makers with reasonable strategies to reduce greenhouse gas (GHG) emissions, and assisting city officials in promoting sustainable infrastructure plans, among many others. In this study, we explored the relationship between air quality data - particulate matter (PM) data (PM10 and PM2.5) and carbon monoxide (CO) - and human mobility data in South Korea. This relationship is then implemented in an agent-based model (ABM) that replicates population mobility based on these datasets in order to discover and simulate air quality with respect to human mobility.
711,165
Title: A geo-visual analysis for exploring the socioeconomic benefits of the heating electrification using geothermal energy Abstract: ABSTRACT In parallel to population growth and climate change, the rapid pace of urbanization worldwide has led to an enormous increase in energy demand and costs in urban areas. The subsequent energy burden has become an increasing concern for many households in the U.S. Previous studies have revealed that geothermal resources can effectively lower the electricity demand and carbon emissions in large cities. In this paper, we focus on the socioeconomic impacts of geothermal energy on urban systems by presenting an interactive visual analytics dashboard. The dashboard allows urban planners to spatially examine geothermal energy's practical benefits for energy affordability, urban livability, and resilience across the U.S. We compiled a list of socioeconomic metrics by integrating the simulation results from multiple geothermal and building models with multi-domain urban datasets (socioeconomic, demographic, and electricity utility). These metrics are created to characterize the benefits of the heating electrification of buildings using Geothermal Heat Pumps (GHP) for lowering the energy burden of middle- and low-income households nationwide. The visual dashboard employs a combination of multivariate, glyph-based, and geospatial visualization to reveal the variability and patterns in our metrics. We present a pilot study to demonstrate the GHPs' potential as a renewable and affordable solution for increasing the economic and energy grid resilience of U.S. cities.
711,166
Title: Generative adversarial networks for ensemble projections of future urban morphology Abstract: ABSTRACT As city planners design and adapt cities for future resilience and intelligence, they must consider how neighborhood morphology develops in response to population change and how the resulting built infrastructure impacts the natural environment. For a deep understanding of these interactions, explicit representation of future neighborhoods is necessary for future city modeling. Generative Adversarial Networks (GANs) have been shown to produce spatially accurate urban forms at scales ranging from entire cities down to neighborhoods and single buildings. Here we demonstrate a GAN method for generating an ensemble of possible new neighborhoods given land use characteristics and a designated neighborhood type.
711,167
Title: Stealthy path planning against dynamic observers Abstract: ABSTRACT In virtual environments, research into the problem of stealthy or covert path planning has either assumed fixed and static motion of observers or has used relatively simple probabilistic models that statically summarize potential behavior. In this paper, we introduce a method that dynamically estimates enemy motion in order to plan covert paths in a prototype game environment. We compare our results to other baseline pathfinding methods and conduct an extensive exploration of the many parameters and design choices involved to better understand the impact of different settings on the success of covert path planning in virtual environments. Our design provides a more flexible approach to covert pathfinding problems, and our analysis provides useful insights into the relative weighting of the different factors that can improve design choices in building stealth scenarios.
711,168
Title: Time Reversal and Simulation Merging for Target-Driven Fluid Animation Abstract: ABSTRACT We present an approach to control the animation of liquids. The user influences the simulation by providing a target surface which will be matched by a portion of the liquid at a specific frame of the animation; our approach is also effective for multiple target surfaces forming an animated sequence. A source simulation provides the context liquid animation with which we integrate the controlled target elements. From each target frame, we compute a target simulation in two parts, one forward and one backward, which are then joined together. The particles for the two simulations are initially placed on the target shape, with velocities sampled from the source simulation. The backward particles use velocities in the opposite direction as the forward simulation, so that the two halves join seamlessly. When there are multiple target frames, each target frame simulation is computed independently, and the particles from these multiple target simulations are later combined. In turn, the target simulation is joined to the source simulation. Appropriate steps are taken to select which particles to keep when joining the forward, backward, and source simulations. This results in an approach where only a small fraction of the computation time is devoted to the target simulation, allowing faster computation times as well as good turnaround times when designing the full animation. Source and target simulations are computed using an off-the-shelf Lagrangian simulator, making it easy to integrate our approach with many existing animation pipelines. We present test scenarios demonstrating the effectiveness of the approach in achieving a well-formed target shape, while still depicting a convincing liquid look and feel.
711,169
Title: Learning High-Risk High-Precision Motion Control Abstract: ABSTRACT Deep reinforcement learning (DRL) algorithms for movement control are typically evaluated and benchmarked on sequential decision tasks where imprecise actions may be corrected with later actions, thus allowing high returns with noisy actions. In contrast, we focus on an under-researched class of high-risk, high-precision motion control problems where actions carry irreversible outcomes, driving sharp peaks and ridges to plague the state-action reward landscape. Using computational pool as a representative example of such problems, we propose and evaluate State-Conditioned Shooting (SCOOT), a novel DRL algorithm that builds on advantage-weighted regression (AWR) with three key modifications: 1) Performing policy optimization only using elite samples, allowing the policy to better latch on to the rare high-reward action samples; 2) Utilizing a mixture-of-experts (MoE) policy, to allow switching between reward landscape modes depending on the state; 3) Adding a distance regularization term and a learning curriculum to encourage exploring diverse strategies before adapting to the most advantageous samples. We showcase our features’ performance in learning physically-based billiard shots, demonstrating high action precision and discovering multiple shot strategies for a given ball configuration.
711,170
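The first modification above (elite-sample policy optimization with advantage weighting) can be sketched in a few lines. Below is a linear-Gaussian toy in NumPy: keep only the top-advantage samples, weight them by exp(A / beta), and regress the policy mean toward those actions. SCOOT itself trains a mixture-of-experts network; everything here, including beta and the elite fraction, is an illustrative assumption.

```python
# Elite-filtered advantage-weighted regression on a linear policy (toy).
import numpy as np

rng = np.random.default_rng(0)
states = rng.normal(size=(256, 4))
actions = rng.normal(size=(256, 2))
advantages = rng.normal(size=256)

beta, elite_frac = 1.0, 0.2
elite = advantages >= np.quantile(advantages, 1.0 - elite_frac)  # elite samples
w = np.exp(advantages[elite] / beta)   # AWR-style exponential advantage weights
w /= w.sum()

# Weighted least squares: fit mean action = state @ W on elite samples only.
S, A = states[elite], actions[elite]
Sw = S * w[:, None]
W = np.linalg.solve(S.T @ Sw + 1e-6 * np.eye(4), Sw.T @ A)
print("policy weights shape:", W.shape)   # (4, 2): state -> mean action
```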
Title: A Practical Method for Butterfly Motion Capture Abstract: ABSTRACT Simulating realistic butterfly motion has been a widely-known challenging problem in computer animation. Arguably, one of the main reasons is the difficulty of acquiring accurate flight motion of butterflies. In this paper, we propose a practical yet effective, optical marker-based approach to capture and process the detailed motion of a flying butterfly. Specifically, we first capture the trajectories of the wings and thorax of a flying butterfly using optical marker-based motion tracking. After that, our method automatically fills the positions of missing markers by exploiting the continuity and relevance of neighboring frames, and improves the quality of the captured motion via noise filtering with optimized parameter settings. Through comparisons with existing motion processing methods, we demonstrate the effectiveness of our approach in obtaining accurate flight motions of butterflies. Furthermore, we created and will release a first-of-its-kind butterfly motion capture dataset to the research community.
711,171
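The gap-filling step can be approximated by temporal interpolation over lost-marker frames, as in the toy below; the paper's method additionally exploits relevance between neighboring frames and markers and applies optimized noise filtering, which this sketch omits.

```python
# Fill missing marker positions (NaNs) by per-axis linear interpolation.
import numpy as np

def fill_gaps(track: np.ndarray) -> np.ndarray:
    """track: (frames, 3) marker positions with NaN where the marker was lost."""
    filled = track.copy()
    t = np.arange(len(track))
    for axis in range(track.shape[1]):
        good = ~np.isnan(track[:, axis])
        filled[:, axis] = np.interp(t, t[good], track[good, axis])
    return filled

track = np.array([[0.0, 0, 0], [1, 1, 0], [np.nan] * 3, [np.nan] * 3, [4, 4, 0]])
print(fill_gaps(track))   # gap frames become [2, 2, 0] and [3, 3, 0]
```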
Title: A Tool for Extracting 3D Avatar-Ready Gesture Animations from Monocular Videos Abstract: ABSTRACT Modeling and generating realistic human gesture animations from speech audio has great impact on creating believable virtual humans that can interact with human users and mimic real-world face-to-face communication. Large-scale datasets are essential in data-driven research, but creating multi-modal gesture datasets with 3D gesture motions and corresponding speech audio is either expensive via traditional workflows such as mocap, or produces subpar results via pose estimation from in-the-wild videos. As a result of such limitations, existing gesture datasets either suffer from shorter duration or lower animation quality, making them less ideal for training gesture synthesis models. Motivated by the key limitations of previous datasets and recent progress in human mesh recovery (HMR), we developed a tool for extracting avatar-ready gesture motions from monocular videos with improved animation quality. The tool utilizes a variational autoencoder (VAE) to refine raw gesture motions. The resulting gestures are in a unified pose representation that includes both body and finger motions and can be readily applied to a virtual avatar via online motion retargeting. We validated the proposed tool on existing datasets and created the refined dataset TED-SMPLX by re-processing videos from the original TED dataset. The new dataset is available at https://andrewfengusa.github.io/TED_SMPLX_Dataset.
711,172
Title: Impact of Self-Contacts on Perceived Pose Equivalences Abstract: ABSTRACT Defining equivalences between poses of different human characters is an important problem for imitation research, human pose recognition and deformation transfer. However, pose equivalence is a subjective notion that depends on context and on the morphology of the characters. A common hypothesis is that interactions between body surfaces, such as self-contacts, are important attributes of human poses, and they are therefore consistently included in animation approaches aiming at retargeting human motions. However, some of these self-contacts are only present because of the morphology of the character and are not important to the pose, e.g., contacts between the upper arms and the torso during a standing A-pose. In this paper, we conduct a first study towards the goal of understanding the impact of self-contacts between body surfaces on perceived pose equivalences. More specifically, we focus on contacts between the arms or hands and the upper body, which are frequent in everyday human poses. We conduct a study in which we present observers with two models of a character mimicking the pose of a source character, one with the same self-contacts as the source and one with one self-contact removed, and ask observers to select which model best mimics the source pose. We show that while poses with different self-contacts are considered different by observers in most cases, this effect is stronger for self-contacts involving the hands than for those involving the arms.
711,173
Title: Learning Gait Emotions Using Affective and Deep Features Abstract: ABSTRACT We present a novel data-driven algorithm to learn the perceived emotions of individuals based on their walking motion or gaits. Given an RGB video of an individual walking, we extract their walking gait as a sequence of 3D poses. Our goal is to exploit the gait features to learn and model the emotional state of the individual into one of four categorical emotions: happy, sad, angry, or neutral. Our perceived emotion identification approach uses deep features learned using long short-term memory networks (LSTMs) on datasets with labeled emotive gaits. We combine these features with gait-based affective features consisting of posture and movement measures. Our algorithm identifies both the categorical emotions from the gaits and the corresponding values for the dimensional emotion components - valence and arousal. We also introduce and benchmark a dataset called Emotion Walk (EWalk), consisting of videos of gaits of individuals annotated with emotions. We show that our algorithm mapping the combined feature space to the perceived emotional state provides an accuracy of 80.07% on the EWalk dataset, outperforming the current baselines by an absolute 13–24%.
711,174
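To make the notion of gait-based affective features concrete, here is a toy Python sketch computing two posture/movement measures from a 3D pose sequence. The joint indices, up-axis convention, and choice of measures are hypothetical, not the paper's actual feature set.

```python
import numpy as np

def gait_affective_features(poses, fps=30.0, root=0, head=15):
    """Toy posture/movement measures from a gait sequence.

    poses: array of shape (frames, joints, 3).  The joint indices `root`
    and `head`, and the assumption that y (index 1) is the up axis, are
    placeholders for a real skeleton layout."""
    poses = np.asarray(poses, dtype=float)
    root_traj = poses[:, root]                         # (frames, 3) pelvis path
    step = np.diff(root_traj, axis=0)
    speed = np.linalg.norm(step, axis=1).mean() * fps  # mean walking speed
    height = poses[:, head, 1] - poses[:, root, 1]     # head height above pelvis
    slouch = float(height.mean())                      # lower => more slouched posture
    return np.array([speed, slouch])
```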
Title: A new framework for the evaluation of locomotive motion datasets through motion matching techniques Abstract: ABSTRACT Analyzing motion data is a critical step when building meaningful locomotive motion datasets. This can be done by labeling motion capture data and inspecting it, through a planned motion capture session, or by carefully selecting locomotion clips from a public dataset. These analyses, however, have no clear definition of coverage, making it harder to diagnose when something goes wrong, such as a virtual character not being able to perform an action or not moving at a given speed. This issue is compounded by the large amount of information present in motion capture data, which poses a challenge when trying to interpret it. This work provides a visualization and an optimization method to streamline the process of crafting locomotive motion datasets. It provides a more grounded approach to locomotive motion analysis by calculating different quality metrics, such as coverage in terms of both linear and angular speeds, frame use frequency in each animation clip, deviation from the planned path, number of transitions, number of used vs. unused animations, and transition cost. By using these metrics as a means of comparison between different motion datasets, our approach provides a less subjective alternative for the modification and analysis of motion datasets, while improving interpretability.
711,175
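One way to realize the coverage notion this abstract calls for is to bin the dataset's linear and angular speeds and report the fraction of bins visited. A minimal Python sketch; the binning scheme is an assumption, not the paper's exact metric.

```python
import numpy as np

def speed_coverage(linear_speeds, angular_speeds, lin_bins, ang_bins):
    """Fraction of (linear, angular) speed bins that the dataset visits.

    lin_bins / ang_bins are bin-edge arrays, e.g. np.linspace(0, 4, 9)
    for m/s and np.linspace(-180, 180, 13) for deg/s."""
    hist, _, _ = np.histogram2d(linear_speeds, angular_speeds,
                                bins=[lin_bins, ang_bins])
    return float((hist > 0).mean())
```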
Title: Simulating Fracture in Anisotropic Materials Containing Impurities Abstract: ABSTRACT Fracture simulation of real-world materials is an exceptionally challenging problem due to complex material properties like anisotropic elasticity and the presence of material impurities. We present a graph-based finite element method to simulate dynamic fracture in anisotropic materials. We further enhance this model by developing a novel probabilistic damage mechanics model for materials with impurities, based on a random graph formulation. We demonstrate how this formulation can be used by artists for directing and controlling fracture. We simulate and render fractures for a diverse set of materials to demonstrate the potency and robustness of our methods.
711,176
Title: S2M-Net: Speech Driven Three-party Conversational Motion Synthesis Networks Abstract: ABSTRACT In this paper, we propose a novel conditional generative adversarial network (cGAN) architecture, called S2M-Net, to holistically synthesize realistic three-party conversational animations from acoustic speech input together with speaker marking (i.e., the speaking time of each interlocutor). Specifically, based on a pre-collected three-party conversational motion dataset, we design and train the S2M-Net for three-party conversational animation synthesis. In this architecture, the generator contains an LSTM encoder that encodes a sequence of acoustic speech features into a latent vector, which is fed into a transform unit that maps the latent vector into a gesture kinematics space. The output of this transform unit is then fed into an LSTM decoder to generate the corresponding three-party conversational gesture kinematics. Meanwhile, a discriminator checks whether an input sequence of three-party conversational gesture kinematics is real or fake. To evaluate our method, besides quantitative and qualitative evaluations, we also conducted paired-comparison user studies against the state of the art.
711,177
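A shape-level PyTorch sketch of the generator pipeline the abstract describes (LSTM encoder, transform unit, LSTM decoder); all layer sizes and the pose dimensionality are placeholders, and the discriminator and adversarial training loop are omitted.

```python
import torch
import torch.nn as nn

class S2MGenerator(nn.Module):
    """Sketch: speech features -> LSTM encoder -> transform unit
    -> LSTM decoder -> three-party gesture kinematics.  All sizes are
    placeholders (n_pose = 3 interlocutors x 45 pose dims, for example)."""

    def __init__(self, n_audio=64, n_latent=128, n_pose=3 * 45):
        super().__init__()
        self.encoder = nn.LSTM(n_audio, n_latent, batch_first=True)
        self.transform = nn.Sequential(nn.Linear(n_latent, n_latent), nn.ReLU())
        self.decoder = nn.LSTM(n_latent, n_latent, batch_first=True)
        self.head = nn.Linear(n_latent, n_pose)

    def forward(self, speech):          # speech: (batch, frames, n_audio)
        h, _ = self.encoder(speech)     # per-frame latent vectors
        z = self.transform(h)           # map into gesture kinematics space
        out, _ = self.decoder(z)
        return self.head(out)           # (batch, frames, n_pose)
```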
Title: Toward Believable Acting for Autonomous Animated Characters Abstract: ABSTRACT This paper describes design principles and a system, based on reinforcement learning and procedural animation, to create an autonomous character capable of believable acting—exhibiting a responsive and expressive illusion of interactive life, grounded in its subjective experience of its world. The design principles incorporate knowledge from animation, human-computer interaction, and psychology, articulating guidelines that, when followed, support a viewer’s suspension of disbelief. The system’s reinforcement learning brain generates action, emotion, and attention signals based on motivational drives, and its procedural animation system translates those signals into expressive biophysical movement in real time. We demonstrate the system on a stylized quadruped character in a virtual habitat. In a user study, participants rated the character favorably on animacy and ability to experience emotions, which is consistent with finding the character believable.
711,178
Title: Automatic generation of network function accelerators using component-based synthesis Abstract: ABSTRACT Designing networked systems that take best advantage of heterogeneous data planes (e.g., dividing packet processing across both a PISA switch and an x86 CPU) can improve performance, efficiency, and resource consumption. However, programming for multiple hardware targets remains challenging because developers must learn platform-specific languages and skills. While some 'write-once, run-anywhere' compilers exist, they are unable to consider a range of implementation options to tune the network function (NF) to meet performance objectives. In this short paper, we explore preliminary ideas towards a compiler that explores a large search space of different mappings of functionality to hardware. This exploration can be tuned for a programmer-specified objective, such as minimizing memory consumption or maximizing network throughput. Our initial prototype, SyNAPSE, is based on a methodology called component-based synthesis and supports deployments across x86 and Tofino platforms. Relative to a baseline compiler which only generates one deployment decision, SyNAPSE uncovers thousands of deployment options - including a deployment which reduces the amount of controller traffic by an order of magnitude, and another deployment which halves memory usage.
711,179
Title: Synthesizing state machines for data planes Abstract: ABSTRACT The emergence of programmable switches such as the Intel Tofino has made it possible, in theory, to implement many network monitoring applications directly in the network data plane. In practice, however, such implementations are often more challenging than expected. A key difficulty is that such applications often depend, in part, on recognizing traffic patterns that are easy to specify as a deterministic finite state automaton (a DFA) but hard to implement due to stringent hardware constraints: to maximize throughput and avoid race conditions, state machine updates must be completed in a single Tofino pipeline stage, but the limited computational resources make finding an implementation a challenging puzzle. This paper presents a solution to such puzzles: a general framework for synthesizing DFA implementations automatically. A key insight is that such a synthesis system is free to renumber state machine states and implement transitions using any available arithmetic or logical operations over that renumbering, provided the resulting implementation is semantically equivalent to the input specification. To produce such a synthesizer, we model the required state machine semantics and the available single-stage switch operations as SMT constraints. An off-the-shelf SMT solver finds a solution to the constraints, and this solution is then translated to P4 code. We evaluate the effectiveness of our methods by synthesizing state machines for a variety of useful applications, including those that monitor TCP handshakes and video conference streams.
711,180
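To illustrate the synthesis idea, here is a small z3py sketch that asks an SMT solver to find a state renumbering and constants such that a toy absorbing DFA is implemented by a single saturating-add update. The DFA, the bit width, and the candidate operation are all illustrative; the paper's framework models the Tofino's actual single-stage operations.

```python
from z3 import BitVec, Distinct, If, Solver, ULT, sat

# Toy absorbing DFA (e.g. handshake progress): 0 -a-> 1 -a-> 2 -a-> 2.
trans = {0: 1, 1: 2, 2: 2}
W = 8  # bit width of the switch's state register

enc = {q: BitVec(f"enc_{q}", W) for q in trans}   # unknown state renumbering
c = BitVec("c", W)                                # unknown add constant
cap = BitVec("cap", W)                            # unknown saturation value

def update(v):
    # Candidate single-stage implementation: saturating add.
    return If(ULT(v + c, cap), v + c, cap)

solver = Solver()
solver.add(Distinct(*enc.values()))               # renumbering must be injective
for src, dst in trans.items():
    solver.add(update(enc[src]) == enc[dst])      # must match the DFA semantics

if solver.check() == sat:
    print(solver.model())   # a concrete renumbering plus constants
```

If the solver returns sat, the model fixes a concrete renumbering and constants that could then be emitted as P4.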
Title: Practical handling of DNS in the data plane Abstract: ABSTRACT The Domain Name System (DNS) is a significant component of the modern internet. Despite this fact, DNS traffic is mostly unencrypted, and as such a likely target for exploitation by malicious actors. The advancement of programmable switches presents researchers with the opportunity to explore DNS traffic from a new vantage point, without sacrificing network bandwidth. Despite this incentive, DNS research on programmable switches has been scarce, owing to the difficulty of parsing DNS packets. We present a general solution to DNS packet parsing that can handle the vast majority of DNS packets (97%) using current hardware and can easily be scaled to parse all DNS packets as hardware improves. Our highly configurable solution can be adjusted to fit many distinct use cases. Additionally, we explore the challenges involved in parsing DNS packets and present common pitfalls appearing in previous research attempting to do so.
711,181
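The core difficulty the abstract alludes to is the variable-length, length-prefixed label encoding of DNS names, which a switch parser must unroll into a fixed number of stages. A reference sketch of that loop in Python (compression pointers are deliberately left unhandled):

```python
def parse_qname(payload, off):
    """Walk the length-prefixed labels of a DNS name starting at byte `off`.

    This is exactly the variable-length loop a P4 parser must unroll into a
    fixed number of stages.  Compression pointers (length bytes with the top
    two bits set) are out of scope for this sketch."""
    labels = []
    while True:
        n = payload[off]
        if n == 0:                        # zero-length root label ends the name
            return ".".join(labels), off + 1
        if n & 0xC0:                      # compression pointer: not handled here
            raise ValueError("compression pointers not handled in this sketch")
        labels.append(payload[off + 1:off + 1 + n].decode("ascii", "replace"))
        off += 1 + n
```

In a plain DNS-over-UDP payload the question name starts right after the 12-byte header, so parse_qname(payload, 12) walks the first query name.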
Title: Traffic engineering: from ISP to cloud wide area networks Abstract: ABSTRACT We conduct a systematic review of two decades of work on wide-area network traffic engineering (TE). We summarize the contributions of important TE algorithms and systems developed for Internet Service Provider and cloud wide-area networks. We study the evolution of the goals of TE (from performance to reliability), TE system design (from decentralized to fully-centralized to partly-centralized) and the technology used in deploying these systems in large commercial networks (from vendor-specific protocols to software-defined implementations). We define a taxonomy of TE systems to categorize developments in TE research. We identify trends at the forefront of TE research and practice to motivate an agenda for future work in this area. Finally, to aid future work, we are releasing our summaries and implementations of several recent TE algorithms.
711,182
Title: Mass surveillance of VoIP calls in the data plane Abstract: ABSTRACT Over the last decade, programmable data planes have enabled highly customizable and efficient packet processing in commercial off-the-shelf hardware. Although researchers have demonstrated various use cases of this technology, its potential misuse has gained much less traction. This work investigates a typical surveillance scenario, VoIP call identification and monitoring, through a tailored data-plane attack. We introduce DELTA, a network-level side-channel attack that can efficiently identify VoIP calls and their hosting services. DELTA achieves this by tracking the inherent network footprint of VoIP services in the data plane. Specifically, DELTA stores the user addresses recently connected to VoIP services and links potential call flows with these addresses. We implement DELTA on existing hardware and conduct high-throughput tests based on representative traffic. DELTA can simultaneously store around 100,000 VoIP connections per service and identify call streams in-path, at line rate, inside terabits of Internet traffic per second, immediately revealing users' communication patterns.
711,183
Title: Flow-level loss detection with Δ-sketches Abstract: ABSTRACT Packet drops caused by congestion are a fundamental problem in network operation. Yet, it is difficult to detect where drops are happening, let alone which flows are most affected. Detecting the small-timescale drops caused by short bursts of traffic is even more challenging, and traditional monitoring techniques can easily miss them. To uncover packet drops as they occur inside a switch, the analysis must be real-time, fine-grained, and efficient. However, modern switches have distributed packet-processing pipelines that see either the arriving or departing traffic, but not the packet drops. Additionally, they do not have enough memory to store per-flow state. Our MIDST system addresses these challenges through a distributed compact data structure with lightweight coordination between ingress and egress pipelines. MIDST identifies the flows experiencing loss, as well as the bursty flows responsible, across different burst durations. Our evaluation with real-world traces and TCP connections shows that MIDST uses little memory (e.g., 320 KB) while providing high accuracy (95% to 98%) under varying loss rates and burst durations. We evaluate a low-rate DDoS attack and demonstrate the potential use of our measurement results for attack detection and mitigation.
711,184
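A minimal sketch of the Δ-sketch idea: maintain one count-min sketch at ingress and one at egress, and estimate per-flow drops as the difference of the two counts. MIDST's actual data structure adds the lightweight pipeline coordination and burst-duration tracking that this toy version omits.

```python
import hashlib

class CountMinSketch:
    """Tiny count-min sketch; one instance at ingress, one at egress."""

    def __init__(self, depth=4, width=2048):
        self.depth, self.width = depth, width
        self.tab = [[0] * width for _ in range(depth)]

    def _idx(self, key, row):
        digest = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=4).digest()
        return int.from_bytes(digest, "big") % self.width

    def add(self, key, n=1):
        for r in range(self.depth):
            self.tab[r][self._idx(key, r)] += n

    def query(self, key):
        return min(self.tab[r][self._idx(key, r)] for r in range(self.depth))

def estimated_flow_loss(ingress, egress, flow):
    """Delta-sketch idea: per-flow drops ~ ingress count minus egress count."""
    return max(0, ingress.query(flow) - egress.query(flow))
```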
Title: NetGVT: offloading global virtual time computation to programmable switches Abstract: ABSTRACT Distributed discrete-event simulation is an essential method for analyzing large-scale models, including weather forecasting and network simulations. A distributed simulation often requires synchronizing state among the different parts of the model according to a global virtual time (GVT). However, existing approaches require multiple round trips to a server to compute a new GVT value. In this paper, we propose NetGVT, a system that computes GVT using programmable switches, thereby avoiding the round-trip latency of a server-based solution. In particular, our design addresses two main constraints of the switch programming model: the limited number of arithmetic and logic operations and the limited memory available on the device. To address the former, we aggregate computations and unroll them across different pipeline stages in a hierarchical manner. To tackle the memory limitations, we adopt compression mechanisms to store a short representation of virtual clocks in the on-chip registers. We implemented a prototype of NetGVT and evaluated its performance with a synthetic lock-step simulation on a Tofino switch. Our results demonstrate that NetGVT outperforms techniques that do not rely on in-network computing by 40% in terms of distributed simulation completion time.
711,185
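Two toy Python functions illustrating the computation NetGVT offloads: the GVT lower bound itself, and a pairwise hierarchical min of the kind a switch would unroll across pipeline stages. Both are illustrative; the paper additionally compresses clock representations to fit the on-chip registers.

```python
def gvt_lower_bound(local_clocks, in_flight_timestamps):
    """GVT = min over all local virtual times and the timestamps of
    messages still in flight (the classic lower bound)."""
    return min(min(local_clocks),
               min(in_flight_timestamps, default=float("inf")))

def staged_min(values):
    """Pairwise hierarchical min, mirroring how a switch pipeline would
    halve the number of candidates at each stage."""
    values = list(values)
    while len(values) > 1:
        it = iter(values)
        reduced = [min(a, b) for a, b in zip(it, it)]
        if len(values) % 2:              # carry the odd element forward
            reduced.append(values[-1])
        values = reduced
    return values[0]
```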
Title: AccelUPF: accelerating the 5G user plane using programmable hardware Abstract: ABSTRACT The latest generation of 5G telecommunication networks is expected to provide high throughput and low latency while catering to diverse applications like mobile broadband, dense IoT, and self-driving cars. A high-performance User Plane Function (UPF), the main element in the 5G user plane, is critical to achieving these performance goals. This paper presents AccelUPF, a 5G UPF that offloads functionality to programmable dataplane hardware for performance acceleration. While prior work has proposed accelerating the UPF by offloading its data forwarding functionality to programmable hardware, the Packet Forwarding Control Protocol (PFCP) messages from the control plane that configure the hardware data forwarding rules were still processed in software. We show that offloading only data forwarding and not PFCP message processing leads to suboptimal UPF performance for applications like IoT that have a much higher ratio of PFCP messages to data traffic, due to a bottleneck at the software control plane that configures the hardware packet forwarding rules. In contrast to prior work, AccelUPF offloads both PFCP message processing and data forwarding to programmable hardware. AccelUPF overcomes several technical challenges pertaining to the processing of the complex variable-sized PFCP messages within the memory and compute constraints of programmable hardware platforms. Our evaluation of AccelUPF implemented on a Netronome programmable NIC and an Intel Tofino programmable switch demonstrates performance gains over state-of-the-art UPFs for real-world traffic scenarios.
711,186
Title: Does Audio help in deep Audio-Visual Saliency prediction models? Abstract: ABSTRACT Despite existing works on Audio-Visual Saliency Prediction (AVSP) models claiming to achieve promising results by fusing the audio modality with visual-only models, these models fail to leverage audio information. In this paper, we investigate the relevance of audio cues in conjunction with visual ones and conduct extensive analysis by employing well-established audio modules and fusion techniques from diverse correlated audio-visual tasks. Our analysis of ten diverse saliency datasets suggests that none of the examined methods effectively incorporates audio. Furthermore, we bring to light why AVSP models show a performance gain over visual-only models even though the audio branch is agnostic at inference. Our work questions the role of audio in current deep AVSP models and motivates the community to reconsider these complex architectures by demonstrating that simpler alternatives work equally well.
711,187
Title: Structured Multimodal Fusion Network for Referring Image Segmentation Abstract: ABSTRACT Referring image segmentation aims to segment one particular object referred to by a natural language expression in the image. One major challenge of this task is how to understand and align vision and language to distinguish the referent. Another major challenge is how to refine the segmentation mask of the referent. In this paper, we focus on dissecting and enhancing the interaction between modalities to address these challenges. Specifically, we propose a Structured Multimodal Fusion Network (SMFN), which consists of a multimodal tree, a cross-modal transformer, and a mask refinement module. SMFN first exploits multimodal fusion structures to deeply integrate visual and linguistic features so that the referent can be accurately distinguished, and then utilizes a mask refinement module to aggregate multi-scale visual features to clarify boundaries. We conduct extensive experiments on four benchmark datasets and achieve new state-of-the-art performance under different evaluation metrics.
711,188
Title: Exploratory Study on the Perception of Intelligent Virtual Agents With Non-Native Accents Using Synthetic and Natural Speech in German Abstract: ABSTRACT This paper presents an exploratory study which investigates the impact of different non-native accents and the naturalness of speech on the correct assignment of an Intelligent Virtual Agent's (IVA) mother tongue, as well as on its perceived warmth, competence and intelligibility. An online experiment with a between-subjects design was conducted, in which the participants, who were native speakers of German, watched a video of an IVA that spoke German with a non-native accent. The IVA's speech was either synthetically generated or pre-recorded using non-native speakers. The participants experienced an IVA with either a Turkish, Italian or Polish accent, based on the most frequent accents in the German-speaking area. The results revealed that the IVA's accent impacted its perceived warmth, but not its perceived competence or intelligibility. The IVA's naturalness of speech played no role in its classification as a non-native speaker of German, but did affect the correctness of the assigned mother tongue within the Polish accent condition. These results give valuable insight into the perception of non-native speaking IVAs and offer helpful implications for future research with mixed-cultural IVAs.
711,189
Title: Detecting Change Talk in Motivational Interviewing using Verbal and Facial Information Abstract: ABSTRACT Behavior change is one of the most important goals in psychotherapy. This study focuses on Motivational Interviewing (MI), a collaborative form of communication aimed at eliciting the client's own reasons for behavior change. To investigate the effectiveness of facial information in modeling MI, we collected an MI encounter corpus with speech and video data in the nutrition and fitness domains and annotated client utterances using the Manual for the Motivational Interviewing Skill Code (MISC). By analyzing client answers to questions after the session, we found that clients who expressed more Change Talk were more motivated to change their behavior than those who expressed less Change Talk. We then proposed RNN-based multimodal models to detect Change Talk by setting up a 2-class classification task: "Change Talk" and "not Change Talk." Our experiments showed that the best performing model was a multimodal BiLSTM model that fused language and client facial information. We also found that fusing language and facial information as context achieved better performance than the unimodal and no-context models. Moreover, we discuss the label imbalance problem and conduct an additional analysis using turns as the unit of analysis. As a result, our best model reached an F1-score of 0.65 for Change Talk detection.
711,190
Title: Towards creating a conversational memory for long-term meeting support: predicting memorable moments in multi-party conversations through eye-gaze Abstract: ABSTRACT When working in a group, it is essential to understand each other's viewpoints to increase group cohesion and meeting productivity. This can be challenging in teams: participants might be left misunderstood and the discussion could be going around in circles. To tackle this problem, previous research on group interactions has addressed topics such as dominance detection, group engagement, and group creativity. Conversational memory, however, remains a widely unexplored area in the field of multimodal analysis of group interaction. The ability to track what each participant or the group as a whole finds memorable from each meeting would allow a system or agent to continuously optimise its strategy to help a team meet its goals. In the present paper, we therefore investigate what participants take away from each meeting and how it is reflected in group dynamics. As a first step toward such a system, we recorded a multimodal longitudinal meeting corpus (MEMO), which comprises first-party annotations of what participants remember from a discussion and why they remember it. We investigated whether participants of group interactions encode what they remember non-verbally and whether such non-verbal multimodal features can be used to automatically predict what groups are likely to remember. We devised a coding scheme to cluster participants' memorisation reasons into higher-level constructs. We find that low-level multimodal cues, such as gaze and speaker activity, can predict conversational memorability. We also find that non-verbal signals can indicate when a memorable moment starts and ends. We could predict four levels of conversational memorability with an average accuracy of 44%. We also showed that reasons related to participants' personal feelings and experiences are the most frequently mentioned grounds for remembering meeting segments.
711,191
Title: Emotions Matter: Towards Personalizing Human-System Interactions Using a Two-layer Multimodal Approach Abstract: ABSTRACT Monitoring and predicting user task performance is critical, as it provides valuable insights for developing personalized human-system interactions. Key factors that impact task performance include cognitive workload, physiological responses, and affective states. However, because these factors interplay, neglecting any of them can lead to inaccurate task performance prediction. To address this challenge, we developed a novel hierarchical machine learning approach that considers all three factors to predict task performance. We exposed twenty-eight participants to a two-step experimental study. The first step aimed to induce different affective states using a validated video database. The second step required participants to perform validated low and high cognitive workload-inducing tasks. To evaluate performance, we compared models developed using the hierarchical approach, which uses emotional and physiological information, to models that use only physiological information. We observed that our proposed approach always outperformed the models that only use physiological information, achieving a better average person-independent mean absolute error. However, the information gained across the various hierarchical models was not linear. Additionally, we found that the top predictors for each model varied, and the model with the highest information gain included emotional features. These findings suggest the importance of choosing the appropriate machine learning model and predictors for building robust models for predicting task performance.
711,192
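A minimal scikit-learn sketch of a two-layer hierarchical predictor in the spirit of the abstract: the first layer infers affective state from physiological features, and the second layer predicts task performance from physiology augmented with the inferred affect. The random-forest learners stand in for whatever models the authors actually used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

class TwoLayerPerformanceModel:
    """Layer 1 infers affective state from physiological features; layer 2
    predicts task performance from physiology plus the inferred affect."""

    def __init__(self):
        self.affect = RandomForestClassifier(n_estimators=100, random_state=0)
        self.perf = RandomForestRegressor(n_estimators=100, random_state=0)

    def _augment(self, X_physio):
        # Append the affect layer's class probabilities to the raw features.
        return np.hstack([X_physio, self.affect.predict_proba(X_physio)])

    def fit(self, X_physio, y_affect, y_perf):
        self.affect.fit(X_physio, y_affect)
        self.perf.fit(self._augment(X_physio), y_perf)
        return self

    def predict(self, X_physio):
        return self.perf.predict(self._augment(X_physio))
```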
Title: A Spatio-temporal Learning for Music Conditioned Dance Generation Abstract: ABSTRACT Music-conditioned dance generation, i.e., dancing to music, is a usage scenario of multi-modal human motion synthesis. Typically, it is a challenge to choreograph continuous motions coinciding with the melody and rhythm of the music. This paper proposes a position-wise encoding-decoding framework for spatio-temporal learning of motions and long-term skeleton-based dance generation oriented on music. Given the positional embedding of the frames in 1-minute video clips, we first modularize a regional attention-based feed-forward mechanism to encode the music features. Second, based on the skeleton of each frame and the joint trajectories across motion frames, we formalize a graph topology to represent each dance sequence's spatial and temporal knowledge. Specifically, we propose graph convolutional network (GCN)-based blocks to process long-term dependencies of motions and leverage the spatial and temporal features. Both the music and motion paths are learned entirely in positional embedding schemes and constructed by repeating the corresponding blocks. Finally, as the task of dance generation inherently concerns the consistency between music and motion, we propose a cross-modality feature fusion for multimodal interaction and music-conditioned dance generation. Experimental results demonstrate that our method outperforms state-of-the-art methods in motion quality and motion-music correlation metrics.
711,193
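Since the framework is built on positional embeddings of frames, here is the standard sinusoidal positional embedding in NumPy as a reference; whether the paper uses this exact scheme or a learned embedding is not stated in the abstract.

```python
import numpy as np

def sinusoidal_positional_embedding(n_frames, dim):
    """Standard transformer-style positional embedding over video frames:
    sine on even feature indices, cosine on odd ones."""
    pos = np.arange(n_frames)[:, None]                    # (frames, 1)
    i = np.arange(dim)[None, :]                           # (1, dim)
    angle = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))
```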
Title: The Effects of an Embodied Pedagogical Agent's Synthetic Speech Accent on Learning Outcomes Abstract: ABSTRACT Modern text-to-speech engines can be an effective speech choice for embodied virtual pedagogical agents. However, it is not known how synthesized accents influence learning outcomes and perceptions of the agent. In this paper, we conducted a between-subjects experiment (n=60) to determine the effect of a pedagogical agent's machine-synthesized text-to-speech accent (United States English or Indian English) on learning outcomes and perceptions of the agent for students in the United States. Our results indicate that learner gender interacts with synthesized speech accent to significantly affect learning outcomes and perceptions of the agent. Our results reveal that a foreign synthetic speech accent may affect the learning outcomes of female university students (n=30), but not male university students (n=30). Finally, our results indicate that learner gender interacts with synthesized speech accent to affect perceptions of the pedagogical agent's human-likeness. We provide novel insights into the differences between male and female learners in interactions with pedagogical agents with synthetic TTS accents.
711,194
Title: Review of realistic behavior and appearance generation in embodied conversational agents: A comparison between traditional and modern approaches Abstract: ABSTRACT With recent technological advancements, many firms that formerly relied on traditional face-to-face conversations have switched to an online conversation mode. This has not only helped businesses increase their revenues, but has also enabled users and customers to access world-class services. Due to scalability concerns with these platforms, many online conversations have lost the personal touch associated with face-to-face communication. An embodied conversational agent (ECA) can address this void by creating suitable behavior as well as the required voice output via a realistic-looking avatar. However, in order to scale up to a large customer base, the behavior and appearance of agents must be adequately modeled. Traditionally, rule-based methods were used to generate the animations associated with the avatars, but because of their limitations, modern approaches use deep learning models to create end-to-end systems. In our work, we present various conventional and current methodologies for behavior and appearance modeling, discussing the similarities between these systems as well as their limitations. We believe our work will be useful in developing hybrid systems that use both traditional and modern approaches to handle the challenges of creating modern embodied conversational agents.
711,195
Title: Unpretty Please: Ostensibly Polite Wakewords Discourage Politeness in both Robot-Directed and Human-Directed Communication Abstract: ABSTRACT For enhanced performance and privacy, companies deploying voice-activated technologies such as virtual assistants and robots are increasingly tending toward designs in which technologies only begin attending to speech once a specified wakeword is heard. Due to concerns that interactions with such technologies could lead users, especially children, to develop impolite habits, some companies have begun to develop use modes in which interactants are required to use ostensibly polite wakewords such as “Please”. In this paper, we argue that these “please-centering” wakewords are likely to backfire and actually discourage polite interactions due to the particular types of lexical and syntactic priming induced by those wakewords. We then present the results of a human-subject experiment (n=90) that validates those claims.
711,196
Title: Group Formation in Multi-Robot Human Interaction During Service Scenarios Abstract: ABSTRACT In this paper, we explored how a mobile robot should join an existing multimodal interaction between a person and a stationary robot. We developed three different strategies (Circular, Line, and Improper) and an interactive system to put the strategies into practice. Circular (all interactants stand in a circle) and Line (two robots stand in a line facing a person) were based on existing human group spatial arrangements, whereas Improper (the second robot stands far from the person and stationary robot) explored how a bad arrangement might change a user's position and perception. We also investigated how scenarios with different tasks influenced human positions at different stages of the interaction. We conducted a 3×4 mixed-design, in-person user study and found that participants in the Improper condition repositioned themselves to decrease the distance differences between interactants. We also conducted an exploratory analysis of the spatial data to better understand how user actions and social cues, such as gaze and vocalizations, changed spatial behaviors.
711,197
Title: Two-Step Gaze Guidance Abstract: ABSTRACT One challenge of providing guidance for search tasks lies in guiding the user's visual attention to certain objects in a potentially large search space. Previous work has tried to guide the user's attention by providing visual, audio, or haptic cues. State-of-the-art methods either provide hints pointing towards the approximate direction of the target location, for a fast but less accurate search, or require the user to perform a fine-grained search from the beginning, for a precise yet less efficient search. To combine the advantages of both methods, we propose an interaction concept called Two-Step Gaze Guidance. The first-step guidance focuses on quick guidance toward the approximate direction, and the second-step guidance focuses on fine-grained guidance toward the exact location of the target. A between-subjects study (N = 69) with five conditions was carried out to compare the two-step gaze guidance method with the single-step gaze guidance method. Results revealed that the proposed method outperformed the single-step method. More precisely, the introduction of Two-Step Gaze Guidance slightly improved search accuracy, and the use of spatial audio as the first-step guidance significantly enhanced search efficiency. Our results also yielded several design suggestions for gaze guidance methods.
711,198
Title: EdgeSelect: Smartwatch Data Interaction with Minimal Screen Occlusion Abstract: ABSTRACT We present EdgeSelect, a linear target selection interaction technique that utilizes a small portion of the smartwatch display, explicitly designed to mitigate the ‘fat finger’ and screen occlusion problems, two of the most common and well-known challenges when interacting with small displays. To design our technique, we first conducted a user study to answer which segments of the smartwatch display have the least screen occlusion while users are interacting with it. We use results from the first experiment to introduce EdgeSelect, a three-layer non-linear interaction technique, which can be used to interact with multiple co-adjacent graphs on the smartwatch by using a region that is the least prone to finger occlusion. In a second experiment, we explore the density limits of the targets possible with EdgeSelect. Finally, we demonstrate the generalizability of EdgeSelect to interact with various types of content.
711,199
Title: Pull Gestures with Coordinated Graphics on Dual-Screen Devices Abstract: ABSTRACT A new class of dual-touchscreen device is beginning to emerge, either constructed as two screens hinged together, or as a single display that can fold. The interactive experience on these devices is simply that of two 2D touchscreens, with little to no synergy between the interactive areas. In this work, we consider how this unique, emerging form factor creates an interesting 3D niche, in which out-of-plane interactions on one screen can be supported with coordinated graphics in the other orthogonal screen. Following insights from an elicitation study, we focus on "pull gestures", a multimodal interaction combining on-screen touch input with in air movement. These naturally complement traditional multitouch gestures such as tap and pinch, and are an intriguing and useful way to take advantage of the unique geometry of dual-screen devices.
711,200
Title: The Impact of Thermal Cues on Affective Responses to Emotionally Resonant Vibrations Abstract: ABSTRACT This paper investigates how presenting emotionally resonant vibrotactile stimuli at cool, neutral and warm temperature levels impacts mean ratings for emotional resonance and affective response. Affective vibrotactile stimuli can elicit pleasant or calming responses, making them applicable to emotion regulation. Evoking real-world sensations via emotional resonance can widen their affective range and improve their effectiveness, and allow them to enhance immersive multimodal experiences. Thermotactile cues have been shown to affect emotional responses, but have not been combined with emotionally resonant vibrations to see how they change responses to such cues. This study (n=20) assessed the impact of 3 temperature levels (24℃, 30℃, and 34℃) on 15 emotionally resonant vibrotactile cues and observed whether emotionally resonant stimuli exceeded the affective range of non-resonant vibrotactile stimuli. The findings suggest that presenting specific resonant vibrations at temperatures appropriate for the sensation they evoke can improve emotional resonance, and vice versa. In addition, temperature had a positive effect on affective response, and emotionally resonant vibrations were found to have a wider affective range than traditional vibrotactile cues. These findings support using emotionally resonant vibrations and thermal cues to elicit desirable emotional responses in emotion regulation and immersive media applications.
711,201
Title: Multimodal Representations and Assessments of Emotional Fluctuations of Speakers in Call Centers Conversations Abstract: ABSTRACT The question I am addressing in my Ph.D. research is the multimodal representation and assessment of emotional fluctuations of speakers in call center conversations. Emotion detection in human conversations has attracted increasing attention from researchers over the last three decades. Various machine learning models have been developed, from detecting six basic emotions to more subtle, complex dimensional emotions, demonstrating promising results. However, in real-life use cases, the complexity of the data and the cost of human annotation remain challenging. In my research, I will work on various real-life conversations, focusing on real-life data processing, emotional data annotation, and multimodal emotion recognition system design, to build robust and ethical automatic emotion recognition systems.
711,202
Title: Multimodal Representation Learning For Real-World Applications Abstract: ABSTRACT Multimodal representation learning has shown tremendous improvements in recent years. An extensive set of works for fusing multiple modalities has shown promising results on public benchmarks. However, most well-known works target unrealistic settings or toy datasets, and a considerable gap separates existing methods from their real-world applications. In this work, we aim to bridge the gap between well-defined benchmark settings and real-world use cases. We aim to explore architectures inspired by existing promising approaches that have the potential to be implemented in real-world instances. Moreover, we also try to move the research forward by addressing questions that can be solved using multimodal approaches and have a considerable impact on the community. With this work, we attempt to leverage multimodal representation learning methods that directly apply to real-world settings.
711,203
Title: Mood-Emotion Interplay: A Computational Perspective Abstract: ABSTRACT Emotion and mood influence human cognition, memory, judgement, and decision-making abilities. The two affective states differ in terms of duration, intensity, and the cause of occurrence. While studies in psychology demonstrate the interactions between mood and emotion, research in the affective computing domain has, by contrast, predominantly focused on emotion inference from various modalities like facial expression, speech, physiological signals, etc., and hardly any studies exist on mood inference or on the relation between mood and emotion. In this regard, the objective of this research is to simultaneously model the two affective states even though they might occur on different timelines. By using in-the-wild video data and leveraging emotion information, this research aims to infer mood and observe the mood-emotion interplay. To this end, the research to date uses the AFEW-VA dataset and runs experiments using temporal emotional change in addition to mood ratings to predict mood. This approach demonstrates that mood prediction is enhanced when emotional change labels are incorporated in addition to the mood labels.
711,204
Title: Interdisciplinary Corpus-based Approach for Exploring Multimodal Conversational Feedback Abstract: ABSTRACT During spontaneous conversation, interlocutors have three possible actions: speak, be silent, or produce feedback. In order to better understand the mechanisms that render spontaneous interactions successful, this PhD research focuses on conversational feedback: the reactions and responses produced by an interlocutor in a listening position. Feedback is a phenomenon of deep importance for the quality of the interaction. It allows interlocutors to share relevant information about understanding, establishment and upgrading of the common ground, engagement, and shared representations. The objective of the PhD is to propose a multimodal model of conversational feedback. The methodological approach is interdisciplinary, combining corpus analysis based on machine learning with linguistic interpretation. The resulting model will be evaluated through its integration into an Embodied Conversational Agent (ECA), together with perception studies.
711,205
Title: Real-time Feedback for Developing Conversation Literacy Abstract: ABSTRACT Learning Analytics (LA) has exploded with the growth in machine learning applications. LA, while not new, is constantly changing, and now allows for automated analyses that once seemed impossible. There is still a need for real-time systems that also center their design around students and data. By using multiple data streams to provide real-time learning analytics, this paper describes a plan for creating a system that aids teachers in managing classes and improves the collaboration literacy of students, while taking into account ethical, privacy, and user concerns. By surveying and engaging with students and teachers, general guidelines for ethics and privacy in LA will be designed. Along with the data, there will be a way to present this information in a useful and engaging manner to students and teachers, determined through studies on the efficacy of different User Interfaces (UIs) and data views, along with further surveys.
711,206
Title: Physiological Sensing for Media Perception & Activity Recognition Abstract: ABSTRACT Wearable sensors have the intriguing potential to continuously evaluate human physiological characteristics in real time without being obtrusive. This thesis aims to incorporate physiological sensor data to investigate media perception and activity recognition. Our primary research goals include (a) neural encoding-based psycho-acoustic attribute analysis for data sonification, (b) empirical evidence for perceptual subjectivity in neural encoding during human-media interactions and the impact of incorporating behavioral ratings, and (c) the efficacy of attention-based transformer models on physiological data for human activity recognition problems.
711,207
Title: Towards Human-Machine Collaboration: Multimodal Group Potency Estimation Abstract: ABSTRACT Collaboration between scholars in the computer and social sciences is focusing on intelligent computer systems that can promote team functioning in hybrid human-machine teams. In order to do so, these systems need to be able to sense group phenomena, such as emergent states, and to devise appropriate strategies of intervention. Among emergent states, group potency – the confidence of the group that it can be effective – is well known to be linked to group effectiveness. This PhD project aims to investigate the nature of group potency and to provide a computational model that captures its dynamics. This will pave the way towards designing an appropriate intervention strategy to increase group potency and foster group effectiveness.
711,208
Title: Designing Hybrid Intelligence Techniques for Facilitating Collaboration Informed by Social Science Abstract: ABSTRACT Designing (socially) intelligent systems for facilitating collaboration in human-human and human-AI teams will require them to have a basic understanding of the principles underlying social decision-making. Partner selection - the ability to identify and select suitable partners for collaborative relationships - is one relevant component of social intelligence and an important ingredient for successful relationship management. In everyday life, decisions to engage in joint undertakings are often based on impressions made during social interactions with potential partners. These impressions, and consequently partner selection, are informed by verbal and non-verbal behavioral cues. Despite its importance, research investigating how these impressions and partner selection decisions unfold in naturalistic settings seems to be lacking. Thus, in this paper, we present a project focused on understanding, predicting, and modeling partner selection, and on understanding its relationship with human impressions in semi-naturalistic settings, such as social interactions, with the aim of informing future design approaches for (hybrid) intelligence systems that can understand, predict, and aid in initiating and facilitating (current and future) collaborations.
711,209
Title: Effective Human-Robot Collaboration via Generalized Robot Error Management Using Natural Human Responses Abstract: ABSTRACT Robot errors during human-robot interaction are unavoidable. Their impact on the collaboration depends on the human's perception of the error and on how quickly it is detected and recovered from. Prior work in robot error management often uses task- or error-specific information for reliable management, and so is not generalizable to other contexts or error types. To achieve generalized error management, one approach is to use the human's response as input. My PhD thesis will focus on enabling effective human-robot interaction by leveraging users' natural multimodal responses to errors to detect, classify, mitigate, and recover from them. This extended abstract details my past, current, and future work towards this goal.
711,210
Title: Non-verbal Signals in Oral History Archives Abstract: ABSTRACT Oral History Archives (OHA) are a rich source of emotional narratives, encapsulating the personal stories of people across different demographics, historical periods, and cultures. Computational technologies have transformed the oral history archival field by facilitating the transcription and verbal content analysis of interview collections where manual inspection is too time-consuming. However, these methods fail to capture the subjective part of the archives. In this project, we explore the potential of automatic analysis of breathing patterns and non-verbal cues in OHA interviews to gain new insights into individual and collective emotional responses across different demographics. The proposed framework will investigate whether automatic breathing signal prediction enhances the performance of speech emotion recognition models and whether a cross-dataset learning approach for breathing signal prediction and paralinguistic analysis works for OHA. Next, we will use the gathered emotional information to study cultural differences in narrating traumatic experiences, focusing on different OHA collections. Lastly, to enhance our research and the literature, we will also design emotion elicitation experiments to create new emotional speech-breathing datasets.
711,211
Title: SleepGuru: Personalized Sleep Planning System for Real-life Actionability and Negotiability Abstract: ABSTRACT Widely accepted sleep guidelines advise regular bedtimes and sleep hygiene. An individual's adherence is often viewed as a matter of self-regulation and anti-procrastination. We pose a question from a different perspective: what if one's social or professional duties mandate an irregular daily life that is incompatible with the premise of standard guidelines? We propose SleepGuru, an individually actionable sleep planning system featuring real-life compatibility and extended forecasts. Adopting theories of sleep physiology, SleepGuru builds a personalized predictor of the progression of the user's sleep pressure over a course of upcoming schedules and past activities, sourced from her online calendar and wearable fitness tracker. The SleepGuru service then provides individually actionable multi-day sleep schedules which respect the user's inevitable real-life irregularities while regulating her week-long sleep pressure. We elaborate on the underlying physiological principles and mathematical models, followed by a 3-stage study and deployment. We developed a mobile user interface providing individual predictions and adjustability, backed by cloud-side optimization. We deployed SleepGuru in the wild with 20 users for 8 weeks, finding positive effects of SleepGuru on sleep quality, compliance rate, sleep efficiency, alertness, and long-term followability.
711,212
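A toy sketch of the kind of sleep-pressure dynamics the abstract invokes, in the spirit of the two-process model of sleep regulation: homeostatic pressure rises exponentially toward an asymptote while awake and decays during sleep. The time constants here are illustrative literature-style values, not SleepGuru's fitted, personalized parameters.

```python
import math

def next_sleep_pressure(pressure, dt_hours, awake,
                        tau_rise=18.2, tau_decay=4.2):
    """One step of a Borbely-style homeostat (Process S): pressure relaxes
    exponentially toward 1 while awake and toward 0 during sleep.  The time
    constants are illustrative, not SleepGuru's personalized fits."""
    if awake:
        return 1.0 - (1.0 - pressure) * math.exp(-dt_hours / tau_rise)
    return pressure * math.exp(-dt_hours / tau_decay)
```

Iterating this over a calendar of wake/sleep blocks yields the kind of multi-day pressure forecast that a planner can then optimize against.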