{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:43:38.815135Z" }, "title": "GreaseVision: Rewriting the Rules of the Interface", "authors": [ { "first": "Siddhartha", "middle": [], "last": "Datta", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oxford", "location": {} }, "email": "siddhartha.datta@cs.ox.ac.uk" }, { "first": "Konrad", "middle": [], "last": "Kollnig", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oxford", "location": {} }, "email": "konrad.kollnig@cs.ox.ac.uk" }, { "first": "Nigel", "middle": [], "last": "Shadbolt", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Oxford", "location": {} }, "email": "nigel.shadbolt@cs.ox.ac.uk" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Digital harms can manifest across any interface. Key problems in addressing these harms include the high individuality of harms and the fast-changing nature of digital systems. We put forth GreaseVision, a collaborative human-inthe-loop learning framework that enables endusers to analyze their screenomes to annotate harms as well as render overlay interventions. We evaluate HITL intervention development with a set of completed tasks in a cognitive walkthrough, and test scalability with one-shot element removal and fine-tuning hate speech classification models. The contribution of the framework and tool allow individual end-users to study their usage history and create personalized interventions. Our contribution also enables researchers to study the distribution of multi-modal harms and interventions at scale.", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Digital harms can manifest across any interface. Key problems in addressing these harms include the high individuality of harms and the fast-changing nature of digital systems. We put forth GreaseVision, a collaborative human-inthe-loop learning framework that enables endusers to analyze their screenomes to annotate harms as well as render overlay interventions. We evaluate HITL intervention development with a set of completed tasks in a cognitive walkthrough, and test scalability with one-shot element removal and fine-tuning hate speech classification models. The contribution of the framework and tool allow individual end-users to study their usage history and create personalized interventions. Our contribution also enables researchers to study the distribution of multi-modal harms and interventions at scale.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The design of good user interfaces can be challenging. In a fast changing world, however, with sometimes highly individual needs, traditional one-fitsall software development faces difficulty in keeping up with the pace of change and the breadth of user requirements. At the same time, the digital world is rife with a range of harms, from dark patterns to hate speech to violence. This paper takes a step back to improve the user experience in the digital world. To achieve this, we put forward a new design philosophy for the development of software interfaces that serves its users: GreaseVision. Contributions: Our work aims to contribute a novel interface modification framework, which we call GreaseVision. 
At a structural level, our framework enables end-users to develop personalized interface modifications, either individually or collaboratively. This is supported by the use of screenome visualization, human-in-the-loop learning, and an overlay/hooks-enabled low-code development platform. Within the defined scopes, we enable the aggregation of distributionally-wide end-user digital harms (self-reflection for end-users, and analysis of the harms dataset of text, images and elements for researchers). This in turn enables the modification of user interfaces across a wide range of software systems, supported by visual overlays that are autonomously developed by users and enhanced by scalable machine learning techniques. We provide complete and reproducible implementation details to enable researchers to study not only harms and interventions, but other interface modification use cases as well. Structure: We introduce the challenge of end-user interface modification to curb digital harms in Sections 1 and 2. We share our proposed method -GreaseVision -in Section 3. We evaluate our method in Section 4, and share final thoughts and conclusions in Section 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We summarize key related work here; for detailed related works, we refer the reader to Appendix: Section 6.2. Our motivating problem is the high individuality of digital harms across a distribution of users. The harms landscape is quickly changing with ever-changing digital systems, ranging from heavily-biased content (e.g. disinformation, hate speech), self-harm (e.g. eating disorders, self-cutting, suicide), and cyber crime (e.g. cyberbullying, harassment, promotion of and recruitment for extreme causes such as terrorist organizations), to demographic-specific exploitation (e.g. child-inappropriate content, social engineering attacks) (HM, 2019; Pater and Mynatt, 2017; Honary et al., 2020; Pater et al., 2019). Though interface modification frameworks exist, the distribution of interface modifications (interventions) is constrained to the development efforts of an intervention developer, the availability of interventions is skewed towards desktop browsers (much sparser on mobile), and the efforts are highly interface-specific (an app version update breaks code-based modification; videos cannot be perturbed in real-time). Moreover, low-code development platforms, which enable end-users to use visual tools to construct programs, are mostly available for software creation, but few options exist for software modification.", "cite_spans": [ { "start": 639, "end": 649, "text": "(HM, 2019;", "ref_id": "BIBREF30" }, { "start": 650, "end": 673, "text": "Pater and Mynatt, 2017;", "ref_id": "BIBREF52" }, { "start": 674, "end": 694, "text": "Honary et al., 2020;", "ref_id": "BIBREF31" }, { "start": 695, "end": 714, "text": "Pater et al., 2019)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Related Works & Problem", "sec_num": "2" }, { "text": "Due to the non-uniform distribution of users, the diverging distribution of harms would require a wide distribution of interventions. We hypothesize that we can generate this matching distribution of interventions by enabling end-users to render personalized interventions by themselves (i.e. removing intervention developers from the ecosystem). 
To test this hypothesis, we attempt to bind the harms landscape to the interventions landscape by developing a collaborative human-in-the-loop system where end-users can inspect their browsing history and generate corresponding interventions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works & Problem", "sec_num": "2" }, { "text": "We pursue a visual overlay modifications approach, extending the work of GreaseTerminator. The framework renders overlay graphics over an underlay screen based on detected GUI elements, images or text (as opposed to implementing program code changes natively), hence changing the interface rather than the functionality of the software. To provide end-users with input for self-reflection (Lyngs et al., 2020a) and source data for generating interventions, users can be shown their screenome (Reeves et al., 2020, 2021), a record of a user's digital experiences represented as a sequence of screen images that they view and interact with over time. To connect the input (screenome) and output (intervention), Human-in-the-Loop (HITL) learning can be used for users to annotate their screenomes for harmful text, images or GUI elements, and these annotations can be used to develop interventions. Wu et al. (2021) offers a detailed review of HITL. Specifically, the procedure to generate interventions using visual overlays will require the one-shot detection of masks (e.g. GUI elements) and few-shot learning and/or model fine-tuning of image/text classification models. System requirements: Based on the problem and our collaborative HITL interface modification approach, we establish the following technical requirement (Requirement 1) and systemic requirement (Requirement 2) for our framework:", "cite_spans": [ { "start": 392, "end": 412, "text": "Lyngs et al., 2020a)", "ref_id": null }, { "start": 494, "end": 514, "text": "(Reeves et al., 2020", "ref_id": "BIBREF57" }, { "start": 515, "end": 537, "text": "(Reeves et al., , 2021", "ref_id": "BIBREF56" }, { "start": 915, "end": 931, "text": "Wu et al. (2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Works & Problem", "sec_num": "2" }, { "text": "1. (Req 1) A complete feedback loop between user input (train-time) and interface re-render (test-time). 2. 
(Req 2) Prospects for scalability across the distribution of interface modifications (with respect to both the harms landscape and the rendering landscape).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Works & Problem", "sec_num": "2" }, { "text": "We define the GreaseVision architecture, with which end-users (system administrators) interact, as follows (Figure 6(b)): (i) the user logs into the GreaseVision system to access a set of personal emulators and interventions (the system admin has provisioned a set of emulated devices, hosted on a server through virtual machines or docker containers for each emulator/interface, and handles streaming of the emulators, pre-requisites for the emulators, data migrations, etc.); (ii) the user selects their desired interventions and continues browsing on their interfaces; (iii) after a time period, the user accesses their screenome and annotates interface elements, graphics, or text from which they would like to generate interventions; these then re-populate the list of interventions available to members in a network.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Architecture: Binding the Harms Ecosystem to the Interventions Ecosystem", "sec_num": "3.1" }, { "text": "In addition to the contributions stated in Section 1, GreaseVision is an improved visual overlay modification approach with respect to interface-agnosticity and ease of use. We discuss the specific aspects of GreaseTerminator we adopt in GreaseVision (hooks and overlays), and the technical improvements upon GreaseTerminator in Appendix: Section 6.1, specifically latency, device support, and interface-agnosticity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Architecture: Binding the Harms Ecosystem to the Interventions Ecosystem", "sec_num": "3.1" }, { "text": "In our current implementation, the user accesses a web application (compatible with both desktop and mobile browsers). With their login credentials, the database loads the corresponding mapping of the user's virtual machines/containers that are shown in the interface selection page. The central server carries information on accessing a set of emulated devices (devices loaded locally on the central server in our setup). Each emulator is rendered in docker containers or virtual machines where input commands can be redirected. The database also loads the corresponding mapping of available interventions (generated by the user, or by the network of users) in the interventions selection page. The database also loads the screenomes (images of all timestamped, browsed interfaces) in the screenome visualization page. Primary input commands for both desktop and mobile are encoded, including keystroke entry (hardware keyboard, onscreen keyboard) and mouse/touch input (scrolling, swiping, pinching, etc.); input is locked to the coordinates of the displayed screen image on the web app (to avoid stray/accidental input commands), and the coordinates correspond to each virtual machine/container's display coordinates. Screen images are captured at a configurable framerate (we set it to 60FPS), and the images are stored under a directory mapped to the user. Generated masks and fine-tuned models are stored under an interventions directory, and their intervention/file access is likewise locked to the mapped user. 
Interventions are applied sequentially upon a screen image to return a perturbed/new image, which then updates the screen image shown on the client web app.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System Architecture: Binding the Harms Ecosystem to the Interventions Ecosystem", "sec_num": "3.1" }, { "text": "We make use of the three hooks from GreaseTerminator (text, mask, and model hooks), and link them with the screenome visualization tool. While in GreaseTerminator the hooks ease the intervention development process for intervention developers with previous programming knowledge, we further generalize the intervention development process to the extent that even an end-user can craft their own interventions without developer support or expert knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Low-code Development: Binding Screenomes to Interface Modifications", "sec_num": "3.2" }, { "text": "GreaseTerminator enables intervention generation (via hooks) and interface re-rendering (via overlays). Figure 3: Screenome visualization page: The page offers the end-user the ability to traverse through the sequence of timestamped screen images which compose their screenome. They can use bounding boxes to highlight GUI elements, images or text. They can label these elements with specific encodings, such as mask- or text-.", "cite_spans": [], "ref_spans": [ { "start": 148, "end": 156, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Low-code Development: Binding Screenomes to Interface Modifications", "sec_num": "3.2" }, { "text": "The added GreaseVision contribution of connecting these components with HITL learning and screenome visualization to replace developers is what exemplifies end-user autonomy and scalability in personalized interventions. An intersecting data source that enables both end-user self-reflection (Lyngs et al., 2020a) and interface re-rendering via overlay is the screenome. Specifically, we can orchestrate a loop that receives input from users and generates outputs for users. Through GreaseVision, end-users can browse through their own screen history, and beyond self-analysis, they can constructively build interface modifications to tackle specific needs. Extending the interface rendering approach of overlays and hook-based intervention development, a generalizable design pattern for GreaseTerminator-based interventions is observed, where current few-shot/fine-tuning techniques can reasonably approach many digital harms, given appropriate extensions to the end-user development suite. In the current development suite (Figure 3), an end-user can inspect their screenomes across all GreaseVision-enabled interfaces (ranging from iOS, Android to desktops), and make use of image segment highlighting techniques to annotate interface patterns to detect (typically UI elements or image/text) and subsequently intervene against these interface patterns. Specifically, the interface images stored and mapped to a user are shown in time-series sequence to the user. The user can go through the sequence of images to reflect on their browsing behavior. The current implementation focuses on one-shot detection of masks and fine-tuning of image and text classification models. 
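To make the rendering side of this loop concrete, the following is a minimal sketch of the sequential per-frame intervention application described in Section 3.1: each active intervention consumes the captured screen image and returns a perturbed image, which is then streamed back to the client. The Intervention wrapper and render_frame function are illustrative names of our own, not the exact GreaseVision implementation.

```python
# A minimal sketch of the per-frame intervention loop (Section 3.1);
# the Intervention wrapper and render_frame are hypothetical names.
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Intervention:
    name: str
    apply: Callable[[np.ndarray], np.ndarray]  # screen image -> perturbed image

def render_frame(frame: np.ndarray, active: List[Intervention]) -> np.ndarray:
    """Apply each user-selected intervention sequentially to one captured frame."""
    for intervention in active:
        frame = intervention.apply(frame)
    return frame  # the perturbed image is what the client web app displays
```

Because each intervention consumes and produces a plain image, mask occlusions, text censors and model-based filters compose freely in any order. 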
When the user identifies a GUI element they do not wish to see across interfaces and apps, they highlight the region of the image and annotate it as mask-; the mask hook will store a mask intervention, which will then populate the list of available interventions with this option, and the user can choose to activate it during a browsing session. When a user identifies text (images) of similar variations that they do not wish to see, they can highlight the text (image) region and annotate it as text- (image-). The text hook will extract the text via OCR, and fine-tune a pretrained text classification model specifically for this type of text. For images, the highlighted region will be cropped as input to fine-tune a pretrained image classification model. The corresponding text (image) occlusion intervention will censor similar text (images) during the user's browsing sessions if activated.", "cite_spans": [ { "start": 250, "end": 270, "text": "Lyngs et al., 2020a)", "ref_id": null } ], "ref_spans": [ { "start": 985, "end": 995, "text": "(Figure 3)", "ref_id": null } ], "eq_spans": [], "section": "Low-code Development: Binding Screenomes to Interface Modifications", "sec_num": "3.2" }, { "text": "Extending model few-shot training and fine-tuning, we can scale the accuracy of the models, not just through improvements to these training methods, but also by improving the data collection dynamics. More specifically, based on the spectrum of personalized and overlapping intervention needs for a distribution of users, we can leverage model-human and human-human collaboration to scale the generation of mask and model interventions. In the case of mask hooks, end-users who encounter certain harmful GUI elements (perhaps due to exposure to specific apps or features prior to other users) can tag and share the mask intervention with other users collaboratively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Low-code Development: Binding Screenomes to Interface Modifications", "sec_num": "3.2" }, { "text": "To collaboratively fine-tune models, users tag text based on a general intervention/category label, which is used to group text together to form a mini-dataset to fine-tune the model. An example of this would be a network of users highlighting racist text they come across in their screenomes that made them uncomfortable during their browsing sessions, and tagging them as text-racist, which aggregates more sentences to fine-tune a text classification model responsible for detecting/classifying text as racist or not, and subsequently occluding the text for the network of users during their live browsing sessions. The current premise is that users in a network know a ground-truth label of the category of the specific text they wish to detect and occlude, and the crowd-sourced text of each of N categories will yield N corresponding fine-tuned models. Collaborative labelling scales the rate at which text of a specific category can be acquired, reducing the burden on a single user while diversifying the fine-tuning training set, and proliferating the fine-tuned models across a network of users without wasting effort re-training models already fine-tuned by other users (i.e. 
increasing scalability of crafting and usage of interventions).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Low-code Development: Binding Screenomes to Interface Modifications", "sec_num": "3.2" }, { "text": "We evaluate the usability of (Req 1) the HITL component (usability for a single user with respect to inputs/outputs; or \"does our system help generate interventions?\"), and (Req 2) the collaborative component (improvement to usability for a single user when multiple users are involved; or \"does our system scale with user numbers?\") with cognitive walkthroughs and scalability tests respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "4" }, { "text": "Qualitatively, we perform a cognitive walkthrough (John and Packer, 1995; Rieman et al., 1995) of the user experience to simulate the cognitive process and explicit actions taken by an end-user during usage of GreaseVision to access interfaces and craft interventions. In our walkthrough, we as researchers assume the role of an end-user. We state the walkthrough step in bold, data pertaining to the task in italics, and descriptive evaluation in normal font. To evaluate the process of constructing an intervention using our proposed HITL system, we inspect the completion of a set of required tasks based on criteria from Parasuraman et al.'s (2000) four types of automation applications, which aim to measure the role of automation in the harms self-reflection and intervention self-development process. The four required tasks to be completed are:", "cite_spans": [ { "start": 50, "end": 73, "text": "(John and Packer, 1995;", "ref_id": "BIBREF35" }, { "start": 74, "end": 94, "text": "Rieman et al., 1995)", "ref_id": "BIBREF58" }, { "start": 647, "end": 673, "text": "(Parasuraman et al., 2000)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Cognitive Walkthrough", "sec_num": "4.1" }, { "text": "1. Information Acquisition: Could a user collect new data points to be used in intervention crafting? 2. Information Analysis: Could a user analyze interface data to inform them of potential harms and interventions? 3. Decision & Action Selection: Could a user act upon the analyzed information about the harms they are exposed to, and develop interventions? 4. Action Implementation: Could a user deploy the intervention in future browsing sessions?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cognitive Walkthrough", "sec_num": "4.1" }, { "text": "User logs in (Figure 1a): The user enters their username and password. These credentials are stored in a database mapped to a specific (set of) virtual machine(s) that contain the interfaces the user registered to access. This is a standard step for any secured or personalized system, where a user is informed they are accessing data and information that is tailored for their own usage.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 24, "text": "Figure 1a", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Cognitive Walkthrough", "sec_num": "4.1" }, { "text": "User selects active interface and interventions (Figure 1b): The user is shown a set of available interventions, be it contributed by themselves or other users in a network. They select their target interventions, and select an interface to access during this session. Based on their own configurations (e.g. 
GreaseVision set up locally on their own computer, or specific virtual machines set up for the required interfaces), users can view the set of interfaces that they can access and use to facilitate their digital experiences. The interface is available 24/7, retains all their personal data and storage, records their screenome data for review, and is accessible via a web browser from any other device/platform. They are less constrained by the hardware limitations of their personal device, and just need to ensure the server hosting the interfaces has sufficient compute resources to host the interface and run the interventions. The populated interventions are also important to the user, as they form a marketplace and ecosystem of personalized and shareable interventions. Users can populate interventions that they themselves generate through the screenome visualization tool, or access interventions collaboratively trained and contributed by multiple members in their network. The interventions are also modular enough that users are not restricted to a specific combination of interventions, and are applied sequentially onto the interface without mismatch in latency between the overlay and underlying interface.", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 60, "text": "Figure 1b)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Cognitive Walkthrough", "sec_num": "4.1" }, { "text": "Table 1: Mask counts and per-platform occlusion results (\u2713 or -) for the evaluated GUI elements: the Stories bar (LinkedIn, Instagram; 1 mask each), the Metrics/Sharing bar (Facebook, Instagram, Twitter, YouTube, TikTok; 2 masks each), and Recommended items (Twitter, Facebook; 2 masks each).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cognitive Walkthrough", "sec_num": "4.1" }, { "text": "Figure 5: Convergence of few-shot/fine-tuned models on sub-groups of hate speech (test accuracy over time for a baseline and for 1, 5 or 10 users contributing 5 or 10 sentences per day). As the capabilities of generating interventions (e.g. more hooks) and rendering interfaces (e.g. interface augmentation) become extended, so does the users' ability to personalize their digital experience, and to generate a distribution of digital experiences to match a similarly wide distribution of users. The autonomy to deploy interventions before usage of an interface, with enhanced optionality through community-contributed interventions, satisfies Task 4.", "cite_spans": [], "ref_spans": [ { "start": 138, "end": 146, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Test accuracy", "sec_num": null }, { "text": "The user accesses the interface and browses (Figure 1c): The user begins usage of the interface through the browser from their desired host device, be it mobile or desktop. They enter input to the system, which is streamed to the virtual machine(s), and interventions render overlay graphics to make any required interface modifications. After the user has chosen their desired interventions, the user will enjoy an improved digital experience through the lack of exposure to certain digital elements, such as undesired text or GUI elements. The altered viewing experience satisfies both Tasks 1 and 4; not only is raw screen data being collected, but the screen is being altered by deployed interventions in the wild. 
What the user previously chose not to see can no longer harm them, and what they do see but no longer wish to see in the future, they can annotate in the screenome visualization tool for removal in future viewings. It is a cyclical loop where users can redesign and self-improve their browsing experiences through the use of unilateral or user-driven tools.", "cite_spans": [], "ref_spans": [ { "start": 46, "end": 55, "text": "Figure 1c", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Test accuracy", "sec_num": null }, { "text": "The user browses their screenome to generate interventions (Figure 3): After a browsing period, the user may opt to browse and view their personal screenome. They enter the screenome visualization page to view recorded intervals of their browsing activity across all interfaces, and they can choose to annotate certain regions (image or text) to generate interventions that re-populate the interventions available. The user is given autonomy in selecting and determining what aspects of the interface, be it the static app interface or dynamically provisioned content, they no longer wish to see in the future. Enabling the user to view their screenome across all used digital interfaces (extending to mobile and desktop) to self-reflect and analyze browsing or content patterns satisfies Task 2. Though the screenome provides the user raw historical data, it may require additional processing (e.g. automated analysis, charts) to avoid information overload. Rather than waiting on app/platform developers or altruistic intervention developers to craft broad-spectrum interventions that may or may not fit their personal needs, the end-user can enjoy a personalized loop of crafting and deploying interventions, almost instantly for certain interventions such as element masks. The user can enter metadata pertaining to each highlighted harm, and not only contribute to their own experience improvement, but also contribute to the improvement of others who may not have encountered or annotated the harm yet. By developing interventions based on their analysis, not only for themselves but potentially for other users, they successfully achieve Task 3. Though previously stated as out of scope, to further support Task 3, other potential intervention modalities such as augmentation could also be contributed by a community of professional intervention developers/researchers (who redirect efforts from individual interventions towards enhancing low-code development tools).", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 69, "text": "(Figure 3)", "ref_id": null } ], "eq_spans": [], "section": "Test accuracy", "sec_num": null }, { "text": "The four tasks, used to determine whether a single user can complete a full feedback loop between input collection/processing and interface rendering through HITL, could all be successfully completed; thus GreaseVision satisfies Requirement 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Test accuracy", "sec_num": null }, { "text": "To evaluate the collaborative component, we measure the improvement to the user experience of a single user through the efforts of multiple users. We evaluate through scalability testing (Meerts and Graham, 2010), a type of load testing that measures a system's ability to scale with respect to the number of users. 
We simulate the usage of the system to evaluate the scalable generation of one-shot graphics (mask) detection, and scalable fine-tuning/few-shot training of (text) models. We do not replicate the scalability analysis on real users: the fine-tuning mechanism is still the same, and the main variable is the set of sentences highlighted (their assigned labels and metadata, as well as the quality of the annotations), though error is expected to be higher in the real world, as the data may be sampled differently and be of lower annotation quality. The primary utility of collaboration to an individual end-user is the scaled reduction of effort in intervention development. We evaluate this in terms of the variety of individualized interventions (variations of masks), and the time saved in constructing a single robust intervention (time needed to construct an accurate model intervention).", "cite_spans": [ { "start": 187, "end": 212, "text": "(Meerts and Graham, 2010)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Scalability Testing", "sec_num": "4.2" }, { "text": "Breadth of interface-agnostic masks (Table 1): We investigate the ease of annotating graphically-consistent GUI elements for few-shot detection. We sample elements to occlude that can exist across a variety of interfaces. We evaluate the occlusion of the stories bar (predominantly found only on mobile devices, not desktop/browsers); some intervention tools exist on Android (Happening, 2021; MaaarZ, 2019; Kollnig et al., 2021) and iOS (Friendly, 2022), though the tools are app- (and version-) specific. We evaluate the occlusion of like/share metrics; there are mainly desktop browser intervention tools (Grosser, 2012, 2018, 2019; hidelikes, 2022), and one Android intervention tool. We evaluate the occlusion of recommendations; there are intervention tools that remove varying extents of the interface on browsers (such as the entire newsfeed) (West, 2012; Unhook, 2022). Existing implementations and interest in such interventions indicate some users have overlapping interests in tackling the removal or occlusion of such GUI elements, though the implementations may not exist across all interface platforms, and may not be robust to version changes. For each intervention, we evaluate on a range of target (emulated) interfaces. We aim for the real-time occlusion of the specific GUI element, and evaluate on native apps (for Android and iOS) and browsers (Android mobile browser, and Linux desktop browser).", "cite_spans": [ { "start": 408, "end": 429, "text": "Kollnig et al., 2021;", "ref_id": "BIBREF7" }, { "start": 607, "end": 621, "text": "(Grosser, 2012", "ref_id": null }, { "start": 622, "end": 638, "text": "(Grosser, , 2018", "ref_id": null }, { "start": 639, "end": 655, "text": "(Grosser, , 2019", "ref_id": null }, { "start": 656, "end": 672, "text": "hidelikes, 2022)", "ref_id": null }, { "start": 873, "end": 885, "text": "(West, 2012;", "ref_id": "BIBREF66" }, { "start": 886, "end": 899, "text": "Unhook, 2022)", "ref_id": null } ], "ref_spans": [ { "start": 36, "end": 46, "text": "(Table 1)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Scalability Testing", "sec_num": "4.2" }, { "text": "For each of the GUI element cases, we make use of the screenome visualization tool to annotate and tag the minimum number of masks of the specific elements we wish to block across a set of apps. 
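As an illustration of how a single annotated mask can drive occlusion at render time, the sketch below implements multi-scale template matching in the spirit of the mask hook (Appendix: Section 6.1), using OpenCV; the function name, scale set and matching threshold are assumptions of ours rather than the exact GreaseVision code.

```python
# Sketch of one-shot mask occlusion via multi-scale template matching; the
# scales, threshold and majority-colour inpainting are illustrative choices.
import cv2
import numpy as np

def occlude_element(screen: np.ndarray, mask: np.ndarray,
                    scales=(0.75, 1.0, 1.25), threshold=0.8) -> np.ndarray:
    out = screen.copy()
    # Default majority-pixel inpainting: fill with the most common screen colour.
    colours, counts = np.unique(out.reshape(-1, 3), axis=0, return_counts=True)
    fill = tuple(int(c) for c in colours[counts.argmax()])
    for s in scales:  # resize the annotated element to tolerate small UI variations
        tmpl = cv2.resize(mask, None, fx=s, fy=s)
        if tmpl.shape[0] > out.shape[0] or tmpl.shape[1] > out.shape[1]:
            continue
        res = cv2.matchTemplate(out, tmpl, cv2.TM_CCOEFF_NORMED)
        for y, x in zip(*np.where(res >= threshold)):
            cv2.rectangle(out, (x, y), (x + tmpl.shape[1], y + tmpl.shape[0]),
                          fill, thickness=-1)
    return out
```

A cropped screenshot of the offending element, saved from the screenome tool, is all the user-side input this requires. 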
There tend to be small variations in the design of an element between browsers and mobile, hence we usually require at least 1 mask from each device type; Android and iOS apps tend to have similar enough GUI elements that a single mask can be reused between them. We tabulate in Table 1 the successful generation and real-time occlusion of all evaluated and applicable GUI elements. We append screenshots of the removal of recommended items from the Twitter and Instagram apps on Android (Figure 2(a,b)). We append screenshots of the demetrification (occlusion of like/share buttons and metrics) of YouTube across desktop browsers (macOS) and mobile browsers (Android, iOS) (Figure 4).", "cite_spans": [], "ref_spans": [ { "start": 475, "end": 482, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 686, "end": 699, "text": "Figure 2(a,b)", "ref_id": null }, { "start": 875, "end": 884, "text": "Figure 4", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Scalability Testing", "sec_num": "4.2" }, { "text": "Convergence of few-shot/fine-tune trained text models (Figure 5): We investigate the accuracy gains from fine-tuning pretrained text models as a function of user numbers and annotated sentence contributions. Specifically, we evaluate the text censoring of hate speech, where the primary form of mitigation is still community standard guidelines and platform moderation, with little end-user tooling available (Bodyguard, 2019). The premise of this empirical evaluation is that we have a group of M simulated users who each contribute N inputs (sentences) of a specific target class (hate speech, specifically against women) per timestep. With respect to a baseline, which is a pretrained model fine-tuned with all available sentences against women from a hate speech dataset, we wish to observe how the test accuracy of a model fine-tuned with M \u00d7 N sentences varies over time. Our source of hate speech for evaluation is the Dynamically Generated Hate Speech Dataset (Vidgen et al., 2021), which contains sentences of non-hate and hate labels, and also classifies hate-labelled data by the target victim of the text (e.g. women, muslim, jewish, black, disabled). As we expect the M users to be labelling a specific niche of hate speech to censor, we specify the subset of hate speech against women (train set count: 1,652; test set count: 187). We fine-tune a publicly-available, pre-trained RoBERTa model (HuggingFace, 2022; Liu et al., 2019), which was trained on a large corpus of English data (Wikipedia (Wikimedia), BookCorpus (Zhu et al., 2015)). For each constant number of users M and constant sentence sampling rate N, at each timestep t, M \u00d7 N \u00d7 t sentences of class hate against target women are acquired; there are a total of 1,652 train set sentences under these constraints (i.e. the maximum number of sentences that can be acquired before reaching the baseline), and to balance the class distribution, we retain all 15,184 train set non-hate sentences. We evaluate the test accuracy of the fine-tuned model on all 187 test set women-targeted hate speech sentences. 
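This simulated collaborative fine-tuning loop can be summarized by the simplified sketch below, assuming the HuggingFace transformers library; the dataset wrapper, function names and the M × N × t sampling schedule are written out for illustration and omit evaluation boilerplate.

```python
# Simplified sketch of the collaborative fine-tuning simulation (Section 4.2);
# data loading, checkpointing and evaluation details are illustrative only.
import torch
from torch.utils.data import Dataset
from transformers import (RobertaForSequenceClassification, RobertaTokenizerFast,
                          Trainer, TrainingArguments)

class TextDataset(Dataset):
    """Tokenized sentences with binary hate (1) / non-hate (0) labels."""
    def __init__(self, texts, labels, tok):
        self.enc = tok(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

def finetune_at_timestep(t, m_users, n_sents, hate_pool, nonhate_texts):
    """Fine-tune RoBERTa on the M * N * t hate sentences pooled by timestep t."""
    tok = RobertaTokenizerFast.from_pretrained("roberta-base")
    model = RobertaForSequenceClassification.from_pretrained("roberta-base",
                                                             num_labels=2)
    hate = hate_pool[: m_users * n_sents * t]  # sentences annotated so far
    train = TextDataset(hate + nonhate_texts,
                        [1] * len(hate) + [0] * len(nonhate_texts), tok)
    args = TrainingArguments(output_dir=f"ckpt_t{t}", num_train_epochs=1,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args, train_dataset=train).train()
    return model  # evaluated on the held-out women-targeted hate sentences
```
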
We also vary M and N to observe the sensitivity of convergence towards baseline test accuracy to these parameters.", "cite_spans": [ { "start": 410, "end": 427, "text": "(Bodyguard, 2019;", "ref_id": null }, { "start": 971, "end": 992, "text": "(Vidgen et al., 2021)", "ref_id": "BIBREF62" }, { "start": 1532, "end": 1550, "text": "(Zhu et al., 2015)", "ref_id": "BIBREF71" } ], "ref_spans": [ { "start": 56, "end": 65, "text": "Figure 5)", "ref_id": null } ], "eq_spans": [], "section": "Scalability Testing", "sec_num": "4.2" }, { "text": "The rate of convergence of a fine-tuned model is quicker when the number of users and contributed sentences per timestep both increase, converging approximately once at least 1,000 sentences are reached for the women hate speech category. The difference in convergence rates indicates that a collaborative approach to training can scale interventions development, as opposed to training text classification models from scratch and each user annotating text alone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Scalability Testing", "sec_num": "4.2" }, { "text": "The empirical results for this section are stated in Table 1 and Figure 5. The data and evaluations from the scalability tests indicate that the ease of mask generation and model fine-tuning, further catalyzed by performance improvements from more users, enable the scalable generation of interventions against their associated harms; thus GreaseVision satisfies Requirement 2.", "cite_spans": [], "ref_spans": [ { "start": 53, "end": 60, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 65, "end": 73, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Scalability Testing", "sec_num": "4.2" }, { "text": "To enable end-user autonomy over interface design, and the generation and proliferation of a distribution of harms and interventions to analyze and reflect upon, we contribute the novel interface modification framework GreaseVision. End-users can reflect on and annotate their digital browsing experiences, and collaboratively craft interface interventions with our HITL and visual overlay mechanisms. With respect to Requirements 1 and 2, we find that our GreaseVision framework allows for scalable yet personalized end-user development of interventions against element-, image- and text-based digital harms. We hope GreaseVision will enable researchers and end-users to study harms and interventions, and other interface modification use cases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "In response to the continued widespread presence of interface-based harms in digital systems, Datta et al. developed GreaseTerminator, a visual overlay modification method. This approach enables researchers to develop, deploy and study interventions against interface-based harms in apps. This is based on the observation that it was previously difficult for researchers to study the efficacy of different intervention designs against harms within mobile apps (most previous approaches focused on desktop browsers). GreaseTerminator provides a set of 'hooks' that serve as templates for researchers to develop interventions, which are then deployed and tested with study participants. GreaseTerminator interventions usually come in the form of machine learning models that build on the provided hooks, automatically detect harms within the smartphone user interface at run-time, and choose appropriate interventions (e.g. 
a visual overlay to hide harmful content, or content warnings). The GreaseTerminator architecture is shown in Figure 6(a) in contrast to the GreaseVision architecture. Technical improvements w.r.t. GreaseTerminator: The improvements of GreaseVision with respect to GreaseTerminator are two-fold: (i) improvements to the framework enabling end-user development and harms mitigation (discussed in detail in Sections 4.2, 4.3, 5 and 6), and (ii) improvements to the technical architecture (which we discuss in this section). Our distinctive and non-trivial technical improvements to the GreaseTerminator architecture fall under three areas, namely latency, device support, and interface-agnosticity. GreaseTerminator requires the end-user device to be the host device, and overlays graphics on top. A downside of this is the non-uniformity of network latency between users (e.g. depending on the internet speed in their location), resulting in a potential mismatch between rendered overlays and the underlying interface. With GreaseVision, we send a post-processed/re-rendered image once to the end-user device's browser (stream buffering) and do not need to send any screen image from the host user device to a server; thus there is no risk of overlay-underlay mismatch, and we even reduce network latency by half. Images are relayed through an HTTPS connection (download/upload speed \u223c 250Mbps, with each image sent by the server amounting to \u223c 1Mb). The theoretical latency per one-way transmission should be (1 \u00d7 1024 \u00d7 8 bits) / (250 \u00d7 10^6 bits/s) = 0.033ms. With each user at most requiring server usage of one NVIDIA GeForce RTX 2080, with reference to existing online benchmarks (Ignatov, 2021), the latency for 1 image (CNN) and text (LSTM) model would be 5.1ms and 4.8ms respectively. While the total theoretical latency for GreaseTerminator is (2 \u00d7 0.033 + 5) = 5.07ms, that of GreaseVision is (0.033 + 5) = 5.03ms. Another downside of GreaseTerminator is that it requires client-side software for each target platform. There would be pre-requisite OS requirements for the end-user device, where only versions of GreaseTerminator developed for each OS can be offered support (currently only for Android). GreaseVision streams screen images directly to a login-verified browser, allowing users to access desktop/mobile interfaces on any browser-supported device. Despite variations in the streaming architecture between GreaseVision and GreaseTerminator, the interface modification framework (hooks and overlays) is retained, hence interventions (even those developed by end-users) from GreaseVision are compatible with GreaseTerminator. In addition to improvements to the streaming architecture to fulfil interface-agnosticity, adapting the visual overlay modification framework into a collaborative HITL implementation further improves the ease-of-use for all stakeholders in the ecosystem. End-users do not need to root their devices, find intervention tools or even self-develop their own customized tools. We eliminate the need for researchers to craft interventions (as users self-develop autonomously) or develop their own custom experience sampling tools (as end-users/researchers can analyze digital experiences from stored screenomes). We also eliminate the need for intervention developers to learn a new technical framework or learn how to fine-tune models. Running emulators on docker containers and virtual machines on a (single) host server is feasible, and thus allows for the browser stream to be accessible cross-device without restriction, e.g. 
accessing an iOS emulator on an Android device, or a macOS virtual machine on a Windows device. Certain limitations are imposed on the current implementation, such as a lack of access to the device camera, audio, and haptics; however, these are not permanent issues, and engineered implementations exist where a virtual/emulated device can route and access the host device's input/output sources (VrtualApp, 2016). Figure 6(b): The high-level architecture of GreaseVision, both as a summary of our technical infrastructure and of the collaborative HITL interventions development approach.", "cite_spans": [ { "start": 2586, "end": 2601, "text": "(Ignatov, 2021)", "ref_id": null } ], "ref_spans": [ { "start": 1040, "end": 1048, "text": "Figure 6", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "GreaseTerminator", "sec_num": "6.1" }, { "text": "Hooks: The text hook enables modifying the text that is displayed on the user's device. It is implemented through character-level optical character recognition (OCR) that takes the screen image as an input and returns a set of characters and their corresponding coordinates. The EAST text detection model (Zhou et al., 2017) detects text in images and returns a set of regions with text; Tesseract (Google, 2007) is then used to extract characters within each region containing text. The mask hook matches the screen image against a target template of multiple images. It is implemented with multi-scale multi-template matching by resizing an image multiple times and sampling different subimages to compare against each mask instance in a masks directory (where each mask is a cropped screenshot of an interface element). We retain the default majority-pixel inpainting method for mask hooks (inpainting with the most common colour value in a screen image or target masked region). As many mobile interfaces are standardized or uniform from a design perspective compared to images from the natural world, this may work in many instances. The mask hook could be connected to rendering functions such as highlighting the interface element with warning labels, image inpainting (filling in the removed element pixels with newly generated pixels from the background), or adding content/information (from other apps) into the inpainted region. Developers can also tweak how the mask hook is applied, for example using the multi-scale multi-template matching algorithm with contourized images (shapes, colour-independent) or coloured images depending on whether the mask contains (dynamic) sub-elements, or using few-shot deep learning models if similar interface elements are non-uniform. A model hook loads any machine learning model to take any input and generate any output. This allows for model embedding (i.e. model weights and architectures) to inform further overlay rendering. We can connect models trained on specific tasks (e.g. person pose detection, emotion/sentiment analysis) to return output given the screen image (e.g. bounding box coordinates to filter), and this output can then be passed to a pre-defined rendering function (e.g. draw filtering box).", "cite_spans": [ { "start": 298, "end": 317, "text": "(Zhou et al., 2017)", "ref_id": "BIBREF70" }, { "start": 407, "end": 421, "text": "(Google, 2007)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "GreaseTerminator", "sec_num": "6.1" }, { "text": "It is well-known that digital harms are widespread in our day-to-day technologies. 
Despite this, the academic literature around these harms is still developing, and it remains difficult to state exactly which harms need to be addressed. Famously, Gray et al. (2018) put forward a five-class taxonomy to classify dark patterns within apps: interface interference (elements that manipulate the user interface to induce certain actions over other actions), nagging (elements that interrupt the user's current task with out-of-focus tasks), forced action (elements that introduce sub-tasks forcefully before permitting a user to complete their desired task), obstruction (elements that introduce sub-tasks with the intention of dissuading a user from performing an operation in the desired mode), and sneaking (elements that conceal or delay information relevant to the user in performing a task).", "cite_spans": [ { "start": 271, "end": 290, "text": "(Gray et al., 2018)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation: Pervasiveness and Individuality of Digital Harms", "sec_num": "6.2.1" }, { "text": "A challenge with such frameworks and taxonomies is to capture and understand the material impacts of harms on individuals. Harms tend to be highly individual and vary in terms of how they manifest within users of digital systems. The harms landscape is also quickly changing with ever-changing digital systems. Defining the spectrum of harms is still an open problem, the range varying from heavily-biased content (e.g. disinformation, hate speech), self-harm (e.g. eating disorders, self-cutting, suicide), and cyber crime (e.g. cyber-bullying, harassment, promotion of and recruitment for extreme causes such as terrorist organizations), to demographic-specific exploitation (e.g. child-inappropriate content, social engineering attacks) (HM, 2019; Pater and Mynatt, 2017; Honary et al., 2020; Pater et al., 2019), for which we recommend the aforementioned cited literature. The last line of defense against many digital harms is the user interface. This is why we are interested in interface-emergent harms in this paper, and how to support individuals in developing their own strategies to cope with and overcome such harms.", "cite_spans": [ { "start": 734, "end": 744, "text": "(HM, 2019;", "ref_id": "BIBREF30" }, { "start": 745, "end": 768, "text": "Pater and Mynatt, 2017;", "ref_id": "BIBREF52" }, { "start": 769, "end": 789, "text": "Honary et al., 2020;", "ref_id": "BIBREF31" }, { "start": 790, "end": 809, "text": "Pater et al., 2019)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation: Pervasiveness and Individuality of Digital Harms", "sec_num": "6.2.1" }, { "text": "Digital harms have long been acknowledged as a general problem, and a range of technical interventions against digital harms have been developed. Interventions, also called modifications or patches, are changes to the software, which result in a change in (perceived) functionality and end-user usage. We review and categorize key technical intervention methods for interface modification by end-users, with cited examples specifically for digital harms mitigation. 
While there also exist non-technical interventions, in particular legal remedies, it is beyond this work to give a full account of these different interventions against harms; a useful framework for such an analysis is provided by Lawrence Lessig (Lessig), who characterised the different regulatory forces in the digital ecosystem.", "cite_spans": [ { "start": 715, "end": 723, "text": "(Lessig)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Developments in Interface Modification & Re-rendering", "sec_num": "6.2.2" }, { "text": "Interface-code modifications (Kollnig et al., 2021; Higi, 2020; Jeon et al., 2012; Rasthofer et al., 2014; Davis and Chen, 2013; Backes et al., 2014; Xu et al., 2012; LuckyPatcher, 2020; Davis et al., 2012; Lyngs et al., 2020b; Freeman, 2020; rovo89, 2020; Agarwal and Hall, 2013; Enck et al., 2010; MaaarZ, 2019; VrtualApp, 2016) make changes to source code, either installation code (to modify software before installation), or run-time code (to modify software during usage). On desktop, this is done through browser extensions and has given rise to a large ecosystem of such extensions. Some of the most well-known interventions are ad blockers, and tools that improve productivity online (e.g. by removing the Facebook newsfeed (Lyngs et al., 2020b)). On mobile, a prominent example is AppGuard (Backes et al., 2014), a research project by Backes et al. that allowed users to improve the privacy properties of apps on their phone by making small, targeted modifications to apps' source code. Another popular mobile solution in the community is the app Lucky Patcher (LuckyPatcher, 2020) that allows users to get paid apps for free by removing the code relating to payment functionality directly from the app code.", "cite_spans": [ { "start": 29, "end": 51, "text": "(Kollnig et al., 2021;", "ref_id": "BIBREF7" }, { "start": 52, "end": 63, "text": "Higi, 2020;", "ref_id": null }, { "start": 64, "end": 82, "text": "Jeon et al., 2012;", "ref_id": "BIBREF34" }, { "start": 83, "end": 106, "text": "Rasthofer et al., 2014;", "ref_id": "BIBREF55" }, { "start": 107, "end": 128, "text": "Davis and Chen, 2013;", "ref_id": "BIBREF9" }, { "start": 129, "end": 149, "text": "Backes et al., 2014;", "ref_id": "BIBREF4" }, { "start": 150, "end": 166, "text": "Xu et al., 2012;", "ref_id": "BIBREF68" }, { "start": 167, "end": 186, "text": "LuckyPatcher, 2020;", "ref_id": null }, { "start": 187, "end": 206, "text": "Davis et al., 2012;", "ref_id": "BIBREF11" }, { "start": 207, "end": 227, "text": "Lyngs et al., 2020b;", "ref_id": null }, { "start": 228, "end": 242, "text": "Freeman, 2020;", "ref_id": "BIBREF14" }, { "start": 243, "end": 256, "text": "rovo89, 2020;", "ref_id": null }, { "start": 257, "end": 280, "text": "Agarwal and Hall, 2013;", "ref_id": "BIBREF1" }, { "start": 281, "end": 299, "text": "Enck et al., 2010;", "ref_id": "BIBREF12" }, { "start": 300, "end": 313, "text": "MaaarZ, 2019;", "ref_id": null }, { "start": 314, "end": 329, "text": "VrtualApp, 2016", "ref_id": "BIBREF63" }, { "start": 734, "end": 755, "text": "(Lyngs et al., 2020b)", "ref_id": null }, { "start": 803, "end": 824, "text": "(Backes et al., 2014)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Developments in Interface Modification & Re-rendering", "sec_num": "6.2.2" }, { "text": "Some of these methods may require the highest level of privilege escalation to make modifications to the operating system and other programs/apps as a root user. 
On iOS, Cydia Substrate (Freeman, 2020) is the foundation for jailbreaking and further device modification. A similar system, called Xposed Framework (rovo89, 2020), exists for Android. To alleviate the risks and challenges associated with privilege escalation, VirtualXposed (VrtualApp, 2016) creates a virtual environment on the user's Android device with simulated privilege escalation. Users can install apps into this virtual environment and apply tools of other modification approaches that may require root access. ProtectMyPrivacy (Agarwal and Hall, 2013) for iOS and TaintDroid (Enck et al., 2010) for Android both extend the smartphone operating system with new functionality for the analysis of apps' privacy features. On desktops, code modifications tend not to be centred around a common framework, but are more commonplace in general due to the traditionally more permissive security model compared to mobile. Antivirus tools, copyright protections of games and the modding of UI components are all often implemented through interface-code modifications.", "cite_spans": [ { "start": 186, "end": 201, "text": "(Freeman, 2020)", "ref_id": "BIBREF14" }, { "start": 701, "end": 725, "text": "(Agarwal and Hall, 2013)", "ref_id": "BIBREF1" }, { "start": 749, "end": 768, "text": "(Enck et al., 2010)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Developments in Interface Modification & Re-rendering", "sec_num": "6.2.2" }, { "text": "Interface-external modifications (Geza, 2019; Bodyguard, 2019; Lee et al., 2014; Ko et al., 2015; Andone et al., 2016; Hiniker et al., 2016; L\u00f6chtefeld et al., 2013; Labs, 2019; Okeke et al., 2018) are arguably the most common way to change default interface behaviour. An end-user would install a program so as to affect other programs/apps. No change to the operating system or the targeted programs/apps is made, so an uninstall of the program providing the modification would revert the device to the original state. This approach is widely used to track duration of device usage, send notifications to the user during usage (e.g. timers, warnings), block certain actions on the user device, and control other aspects. The HabitLab (Geza, 2019) is a prominent example developed by Kovacs et al. at Stanford. This modification framework is open-source and maintained by a community of developers, and provides interventions for both desktop and mobile.", "cite_spans": [ { "start": 33, "end": 45, "text": "(Geza, 2019;", "ref_id": "BIBREF17" }, { "start": 46, "end": 62, "text": "Bodyguard, 2019;", "ref_id": null }, { "start": 63, "end": 80, "text": "Lee et al., 2014;", "ref_id": "BIBREF40" }, { "start": 81, "end": 97, "text": "Ko et al., 2015;", "ref_id": "BIBREF36" }, { "start": 98, "end": 118, "text": "Andone et al., 2016;", "ref_id": "BIBREF3" }, { "start": 119, "end": 140, "text": "Hiniker et al., 2016;", "ref_id": "BIBREF29" }, { "start": 141, "end": 165, "text": "L\u00f6chtefeld et al., 2013;", "ref_id": "BIBREF43" }, { "start": 166, "end": 177, "text": "Labs, 2019;", "ref_id": null }, { "start": 178, "end": 197, "text": "Okeke et al., 2018)", "ref_id": "BIBREF50" }, { "start": 777, "end": 803, "text": "Kovacs et al. 
at Stanford.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Developments in Interface Modification & Re-rendering", "sec_num": "6.2.2" }, { "text": "Visual overlay modifications render graphics on an overlay layer over any active interface instance, including browsers, apps/programs, videos, or any other interface in the operating system. The modifications are visual, and do not change the functionality of the target interface. They may render sub-interfaces, labels, or other graphics on top of the foreground app. Prominent examples are DetoxDroid (flxapps, 2021), Gray-Switch (GmbH, 2021), Google Accessibility Suite (Google, 2021), and GreaseTerminator.", "cite_spans": [ { "start": 473, "end": 487, "text": "(Google, 2021)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Developments in Interface Modification & Re-rendering", "sec_num": "6.2.2" }, { "text": "We would like to establish early on that we pursue a visual overlay modifications approach. Interventions should be rendered in the form of overlay graphics based on detected elements, rather than implementing program code changes natively, hence focusing on changing the interface rather than the functionality of the software. Interventions should be generalizable; they are not solely website- or app-oriented, but interface-oriented. Interventions do not target specific apps, but general interface elements and patterns that could appear across different interface environments. To support the systemic requirements in Section 2.4, we require an interface modification approach that is (i) interface-agnostic and (ii) easy-to-use. To this end, we build upon the work of GreaseTerminator, a framework optimized for these two requirements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Developments in Interface Modification & Re-rendering", "sec_num": "6.2.2" }, { "text": "
A visualisation of the GreaseTerminator approach is shown in Figure 6(a).", "cite_spans": [], "ref_spans": [ { "start": 1042, "end": 1053, "text": "Figure 6(a)", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Developments in Interface Modification & Re-rendering", "sec_num": "6.2.2" }, { "text": "Low-code development platforms have been defined, according to practitioners, to be (i) low-code (negligible programming skill required to reach the end goal, potentially drag-and-drop), (ii) visual programming (a visual approach to development, mostly reliant on a GUI, and \"what-you-see-is-what-you-get\"), and (iii) automated (unattended operations exist to minimize human involvement) (Luo et al., 2021). Low-code development platforms exist for varying stages of software creation, from frontend (e.g. App Maker, Bubble.io, Webflow), to workflow (Airtable, Amazon Honeycode, Google Tables, UiPath, Zapier), to backend (e.g. Firebase, WordPress, FlutterFlow); none exist for the modification of existing applications across interfaces. According to a review of StackOverflow and Reddit posts analysed by Luo et al. (2021), practitioners cite low-code development platforms as tools that enable faster development, lower the barrier to usage by non-technical people, improve IT governance compared to traditional programming, and even suit team development; one of the main limitations cited is that the complexity of the software created is constrained by the options offered by the platform. User studies have shown that users can self-identify malevolent harms and habits upon self-reflection and develop desires to intervene against them (Lyngs et al., 2020a). Not only do end-users have a desire or interest in self-reflection, but there is indication that end-users have a willingness to act. Statistics for content violation reporting from Meta show that in the Jan-Jun 2021 period, \u223c 42,200 and \u223c 5,300 in-app content violations were reported on Facebook and Instagram respectively (Meta, 2022) (in this report, the numbers are specific to violations of local law, so the actual number with respect to community standard violations would be much higher; the numbers also include reporting by governments/courts and non-government entities in addition to members of the public). Despite a willingness to act, there are limited digital visualization or reflection tools that enable flexible intervention development by end-users. There are visualization or reflection tools on browser and mobile that allow for reflection (e.g. on device use time (Andone et al., 2016)), and there are separate, disconnected tools for intervention (Section 2.2), but there are few offerings that let end-users observe and analyze their problems while generating corresponding fixes; the loop from reflection or regret to action is thus cut short prematurely. There is a disconnect between the harms analysis ecosystem and the interventions ecosystem. A barrier to binding these two ecosystems is the lack of low-code development platforms for software modification by end-users. While such tooling may exist for specific use cases on specific interfaces (e.g. web/app/game development), mostly for creation purposes, there are limited options available for modifying existing software, the closest alternative being extension ecosystems (Kollnig et al., 2021; Google, 2010a).
Low-code development platforms are in essence \"developer-less\", removing developers from the software modification pipeline by reducing the barrier to modification through GUI-based features and negligible coding, such that end-users can self-develop without expert knowledge.", "cite_spans": [ { "start": 381, "end": 399, "text": "(Luo et al., 2021)", "ref_id": "BIBREF56" }, { "start": 806, "end": 835, "text": "Luo et al. (Luo et al., 2021)", "ref_id": "BIBREF56" }, { "start": 1367, "end": 1387, "text": "Lyngs et al., 2020a)", "ref_id": null }, { "start": 1714, "end": 1726, "text": "(Meta, 2022)", "ref_id": "BIBREF48" }, { "start": 3094, "end": 3116, "text": "(Kollnig et al., 2021;", "ref_id": "BIBREF7" }, { "start": 3117, "end": 3131, "text": "Google, 2010a)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Opportunities for Low-code Development in Interface Modification", "sec_num": "6.2.3" }, { "text": "Human-in-the-Loop (HITL) learning is the procedure of integrating human knowledge and experience in the augmentation of machine learning models. It is commonly used to have humans generate new data or annotate existing data. Wallace et al. (2019) constructed a HITL system around an interactive interface where a human converses with a machine to generate additional Q&A data and to train/fine-tune Q&A models. Zhang et al. (2019) proposed a HITL system for humans to provide data for entity extraction, including formulating regular expressions, highlighting text documents, and annotating and labelling data. For an extended literature review, we refer the reader to Wu et al. (2021). Beyond lab settings, HITL has proven itself in wide deployment, where a wide distribution of users have indicated a willingness and ability to perform tasks on a HITL annotation tool, reCAPTCHA, to access utility and services. In 2010, Google reported that over 100 million reCAPTCHA instances were displayed every day (Google, 2010b) to annotate different types of data, such as deciphering text for OCR of books or street signs, or labelling objects in images such as traffic lights or vehicles.", "cite_spans": [ { "start": 235, "end": 271, "text": "Wallace et al. (Wallace et al., 2019", "ref_id": "BIBREF64" }, { "start": 438, "end": 457, "text": "(Zhang et al., 2019", "ref_id": "BIBREF69" }, { "start": 720, "end": 737, "text": "(Wu et al., 2021)", "ref_id": null }, { "start": 1054, "end": 1069, "text": "(Google, 2010b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Opportunities for Low-code Development in Interface Modification", "sec_num": "6.2.3" }, { "text": "While HITL formulates the structure for human-AI collaborative model development, model fine-tuning and few-shot learning formulate the algorithmic methods of adapting models to changing inputs, environments, and contexts. Both adaptation approaches require the model to update its parameters with respect to the new input distribution. For model fine-tuning, the developer re-trains a pre-trained model on a new dataset. This is in contrast to training a model from a random initialization. Fine-tuning techniques for pre-trained foundation models, which already contain many of the prerequisite subnetworks required for feature reuse and warm-started training on a smaller target dataset, have indicated robustness on downstream tasks (Galanti et al., 2022; Abnar et al., 2022; Neyshabur et al., 2020).
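As a hedged illustration of such warm-started adaptation, the sketch below fine-tunes a pre-trained roberta-base checkpoint on a tiny labelled sample, in the spirit of adapting a foundation model to a small target dataset; the example texts, labels, and hyperparameters are placeholders, and the Hugging Face transformers library is assumed.

```python
# Illustrative fine-tuning sketch: warm-start from a pre-trained foundation
# model instead of a random initialization. Data and hyperparameters are
# placeholders, not those used in the paper.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

texts = ["an example a user annotated as harmful", "an example annotated as benign"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = AdamW(model.parameters(), lr=2e-5)  # small LR preserves pre-trained features
model.train()
for _ in range(3):  # a few passes usually suffice when warm-starting
    optimizer.zero_grad()
    loss = model(**batch, labels=labels).loss  # cross-entropy over the two classes
    loss.backward()
    optimizer.step()
```

The same loop extends naturally to user-annotated corpora such as hate-speech examples; only the dataset construction changes.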
If there is an extremely large number of input distributions and only a few samples per distribution (small datasets), few-shot learning is an approach where the developer separately trains a meta-model that learns how to change model parameters given only a few samples. Few-shot learning has demonstrated successful test-time adaptation in updating model parameters with respect to limited test-time samples in both image and text domains (Raghu et al., 2020; Koch et al., 2015; Finn et al., 2017; Datta, 2021). Some overlapping techniques even exist between few-shot learning and fine-tuning, such as constructing subspaces and optimizing with respect to intrinsic dimensions (Aghajanyan et al., 2021; Datta and Shadbolt, 2022; Simon et al., 2020).", "cite_spans": [ { "start": 741, "end": 763, "text": "(Galanti et al., 2022;", "ref_id": "BIBREF16" }, { "start": 764, "end": 783, "text": "Abnar et al., 2022;", "ref_id": "BIBREF0" }, { "start": 784, "end": 807, "text": "Neyshabur et al., 2020)", "ref_id": "BIBREF49" }, { "start": 1258, "end": 1278, "text": "(Raghu et al., 2020;", "ref_id": null }, { "start": 1279, "end": 1297, "text": "Koch et al., 2015;", "ref_id": "BIBREF37" }, { "start": 1298, "end": 1316, "text": "Finn et al., 2017;", "ref_id": "BIBREF13" }, { "start": 1317, "end": 1329, "text": "Datta, 2021)", "ref_id": "BIBREF6" }, { "start": 1497, "end": 1522, "text": "(Aghajanyan et al., 2021;", "ref_id": "BIBREF2" }, { "start": 1523, "end": 1548, "text": "Datta and Shadbolt, 2022;", "ref_id": "BIBREF8" }, { "start": 1549, "end": 1568, "text": "Simon et al., 2020)", "ref_id": "BIBREF60" } ], "ref_spans": [], "eq_spans": [], "section": "Opportunities for Low-code Development in Interface Modification", "sec_num": "6.2.3" }, { "text": "The raw data for harms and required interface changes reside in the history of interactions between the user and the interface. In the Screenome project (Reeves et al., 2020, 2021), the investigators proposed the study and analysis of the moment-by-moment changes on a person's screen, by capturing screenshots automatically and unobtrusively every t = 5 seconds while a device is on. This record of a user's digital experiences, represented as the sequence of screens that they view and interact with over time, is denoted the user's screenome.
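The capture step itself is straightforward to sketch. The loop below is an illustration rather than the Screenome project's actual pipeline; the Pillow dependency and the output directory layout are assumptions. It saves a timestamped screenshot every t = 5 seconds, yielding exactly such a sequence of screens.

```python
# Illustrative screenome capture: one timestamped screenshot every
# t = 5 seconds while the device is on. Storage layout is an assumption.
import time
from datetime import datetime
from pathlib import Path
from PIL import ImageGrab  # pip install pillow

OUTPUT_DIR = Path("screenome")
OUTPUT_DIR.mkdir(exist_ok=True)

def capture_screenome(interval_s: float = 5.0):
    while True:
        frame = ImageGrab.grab()  # current screen contents
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        # Each saved frame becomes one element of the user's screenome,
        # later visualized for self-reflection or parsed for annotation.
        frame.save(OUTPUT_DIR / f"{stamp}.png")
        time.sleep(interval_s)

if __name__ == "__main__":
    capture_screenome()
```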
Though not yet mobilized widely amongst users for self-reflection or personalized analysis, screenomes, once integrated into an interface modification framework, can play a dual role: visualizing raw (harms) data for users, while serving as parseable input for visual overlay modification frameworks.", "cite_spans": [ { "start": 153, "end": 173, "text": "(Reeves et al., 2020", "ref_id": "BIBREF57" }, { "start": 174, "end": 196, "text": "(Reeves et al., 2021)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Opportunities for Low-code Development in Interface Modification", "sec_num": "6.2.3" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Exploring the limits of large scale pre-training", "authors": [ { "first": "Samira", "middle": [], "last": "Abnar", "suffix": "" }, { "first": "Mostafa", "middle": [], "last": "Dehghani", "suffix": "" }, { "first": "Behnam", "middle": [], "last": "Neyshabur", "suffix": "" }, { "first": "Hanie", "middle": [], "last": "Sedghi", "suffix": "" } ], "year": 2022, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Samira Abnar, Mostafa Dehghani, Behnam Neyshabur, and Hanie Sedghi. 2022. Exploring the limits of large scale pre-training. In International Conference on Learning Representations.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "ProtectMyPrivacy: Detecting and mitigating privacy leaks on iOS devices using crowdsourcing", "authors": [ { "first": "Yuvraj", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Malcolm", "middle": [], "last": "Hall", "suffix": "" } ], "year": 2013, "venue": "Proceeding of the 11th Annual International Conference on Mobile Systems, Applications, and Services -MobiSys '13", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/2462456.2464460" ] }, "num": null, "urls": [], "raw_text": "Yuvraj Agarwal and Malcolm Hall. 2013. ProtectMyPrivacy: Detecting and mitigating privacy leaks on iOS devices using crowdsourcing. In Proceeding of the 11th Annual International Conference on Mobile Systems, Applications, and Services - MobiSys '13, page 97, Taipei, Taiwan. ACM Press.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Intrinsic dimensionality explains the effectiveness of language model fine-tuning", "authors": [ { "first": "Armen", "middle": [], "last": "Aghajanyan", "suffix": "" }, { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "7319--7328", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.568" ] }, "num": null, "urls": [], "raw_text": "Armen Aghajanyan, Sonal Gupta, and Luke Zettlemoyer. 2021. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7319-7328, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Menthal: A framework for mobile data collection and analysis", "authors": [ { "first": "Konrad", "middle": [], "last": "Ionut Andone", "suffix": "" }, { "first": "Mark", "middle": [], "last": "B\u0142aszkiewicz", "suffix": "" }, { "first": "Boris", "middle": [], "last": "Eibes", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Trendafilov", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Montag", "suffix": "" }, { "first": "", "middle": [], "last": "Markowetz", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, UbiComp '16", "volume": "", "issue": "", "pages": "624--629", "other_ids": { "DOI": [ "10.1145/2968219.2971591" ] }, "num": null, "urls": [], "raw_text": "Ionut Andone, Konrad B\u0142aszkiewicz, Mark Eibes, Boris Trendafilov, Christian Montag, and Alexander Markowetz. 2016. Menthal: A framework for mobile data collection and analysis. In Proceedings of the 2016 ACM International Joint Conference on Perva- sive and Ubiquitous Computing: Adjunct, UbiComp '16, page 624-629, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "AppGuard -Fine-Grained Policy Enforcement for Untrusted Android Applications", "authors": [ { "first": "Michael", "middle": [], "last": "Backes", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Gerling", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Hammer", "suffix": "" }, { "first": "Matteo", "middle": [], "last": "Maffei", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Von Styp-Rekowsky", "suffix": "" } ], "year": 2014, "venue": "Data Privacy Management and Autonomous Spontaneous Security", "volume": "8247", "issue": "", "pages": "213--231", "other_ids": { "DOI": [ "10.1007/978-3-642-54568-9_14" ] }, "num": null, "urls": [], "raw_text": "Michael Backes, Sebastian Gerling, Christian Ham- mer, Matteo Maffei, and Philipp von Styp-Rekowsky. 2014. AppGuard -Fine-Grained Policy Enforce- ment for Untrusted Android Applications. In Joaquin Garcia-Alfaro, Georgios Lioudakis, Nora Cuppens- Boulahia, Simon Foley, and William M. Fitzgerald, editors, Data Privacy Management and Autonomous Spontaneous Security, volume 8247 of Lecture Notes in Computer Science, pages 213-231. Springer Berlin Heidelberg, Berlin, Heidelberg.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Reflect, not regret: Understanding regretful smartphone use with app feature-level analysis", "authors": [ { "first": "Hyunsung", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Daeun", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Donghwi", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Wan", "middle": [ "Ju" ], "last": "Kang", "suffix": "" }, { "first": "Eun", "middle": [ "Kyoung" ], "last": "Choe", "suffix": "" }, { "first": "Sung-Ju", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2021, "venue": "Proc. ACM Hum.-Comput. Interact", "volume": "5", "issue": "CSCW2", "pages": "", "other_ids": { "DOI": [ "10.1145/3479600" ] }, "num": null, "urls": [], "raw_text": "Hyunsung Cho, DaEun Choi, Donghwi Kim, Wan Ju Kang, Eun Kyoung Choe, and Sung-Ju Lee. 2021. Reflect, not regret: Understanding regretful smart- phone use with app feature-level analysis. Proc. ACM Hum.-Comput. 
Interact., 5(CSCW2).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learn2weight: Weights transfer defense against similar-domain adversarial attacks", "authors": [ { "first": "Siddhartha", "middle": [], "last": "Datta", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddhartha Datta. 2021. Learn2weight: Weights trans- fer defense against similar-domain adversarial at- tacks.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Mind-proofing your phone: Navigating the digital minefield with greaseterminator", "authors": [ { "first": "Siddhartha", "middle": [], "last": "Datta", "suffix": "" }, { "first": "Konrad", "middle": [], "last": "Kollnig", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Shadbolt", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Siddhartha Datta, Konrad Kollnig, and Nigel Shad- bolt. 2021. Mind-proofing your phone: Navigating the digital minefield with greaseterminator. CoRR, abs/2112.10699.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Lowloss subspace compression for clean gains against multi-agent backdoor attacks", "authors": [ { "first": "Siddhartha", "middle": [], "last": "Datta", "suffix": "" }, { "first": "Nigel", "middle": [], "last": "Shadbolt", "suffix": "" } ], "year": 2022, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2203.03692" ] }, "num": null, "urls": [], "raw_text": "Siddhartha Datta and Nigel Shadbolt. 2022. Low- loss subspace compression for clean gains against multi-agent backdoor attacks. arXiv preprint arXiv:2203.03692.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "RetroSkeleton: Retrofitting android apps", "authors": [ { "first": "Benjamin", "middle": [], "last": "Davis", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2013, "venue": "Proceeding of the 11th", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/2462456.2464462" ] }, "num": null, "urls": [], "raw_text": "Benjamin Davis and Hao Chen. 2013. RetroSkeleton: Retrofitting android apps. In Proceeding of the 11th", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Annual International Conference on Mobile Systems, Applications, and Services -MobiSys '13", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual International Conference on Mobile Systems, Applications, and Services -MobiSys '13, page 181, Taipei, Taiwan. ACM Press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "I-arm-droid: A rewriting framework for in-app reference monitors for android applications", "authors": [ { "first": "Benjamin", "middle": [], "last": "Davis", "suffix": "" }, { "first": "S", "middle": [], "last": "Ben", "suffix": "" }, { "first": "Armen", "middle": [], "last": "Khodaverdian", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Mobile Security Technologies 2012, MOST '12", "volume": "", "issue": "", "pages": "1--9", "other_ids": {}, "num": null, "urls": [], "raw_text": "Benjamin Davis, Ben S, Armen Khodaverdian, and Hao Chen. 2012. I-arm-droid: A rewriting framework for in-app reference monitors for android applications. 
In In Proceedings of the Mobile Security Technolo- gies 2012, MOST '12., pages 1-9, New York, NY, United States. IEEE.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "TaintDroid: An Informationflow Tracking System for Realtime Privacy Monitoring on Smartphones", "authors": [ { "first": "William", "middle": [], "last": "Enck", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Gilbert", "suffix": "" }, { "first": "Byung-Gon", "middle": [], "last": "Chun", "suffix": "" }, { "first": "Landon", "middle": [ "P" ], "last": "Cox", "suffix": "" }, { "first": "Jaeyeon", "middle": [], "last": "Jung", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Mcdaniel", "suffix": "" }, { "first": "N", "middle": [], "last": "Sheth", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 9th USENIX Conference on Operating Systems Design and Implementation, OSDI'10", "volume": "", "issue": "", "pages": "393--407", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Enck, Peter Gilbert, Byung-Gon Chun, Lan- don P. Cox, Jaeyeon Jung, Patrick McDaniel, and Anmol N. Sheth. 2010. TaintDroid: An Information- flow Tracking System for Realtime Privacy Mon- itoring on Smartphones. In Proceedings of the 9th USENIX Conference on Operating Systems De- sign and Implementation, OSDI'10, pages 393-407, Berkeley, CA, United States. USENIX Association.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "authors": [ { "first": "Chelsea", "middle": [], "last": "Finn", "suffix": "" }, { "first": "Pieter", "middle": [], "last": "Abbeel", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Levine", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chelsea Finn, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Cydia substrate", "authors": [ { "first": "Jay", "middle": [], "last": "Freeman", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jay Freeman. 2020. Cydia substrate.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "App Studio Friendly. 2022. Friendly social browser", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "App Studio Friendly. 2022. Friendly social browser.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "On the role of neural collapse in transfer learning", "authors": [ { "first": "Tomer", "middle": [], "last": "Galanti", "suffix": "" }, { "first": "Andr\u00e1s", "middle": [], "last": "Gy\u00f6rgy", "suffix": "" }, { "first": "Marcus", "middle": [], "last": "Hutter", "suffix": "" } ], "year": 2022, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tomer Galanti, Andr\u00e1s Gy\u00f6rgy, and Marcus Hutter. 2022. On the role of neural collapse in transfer learn- ing. 
In International Conference on Learning Repre- sentations.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "HabitLab: In-The-Wild Behavior Change Experiments at Scale", "authors": [ { "first": "", "middle": [], "last": "Kovacs Geza", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kovacs Geza. 2019. HabitLab: In-The-Wild Behavior Change Experiments at Scale. Stanford Department of Computer Science.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Chrome web store", "authors": [ { "first": "", "middle": [], "last": "Google", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Google. 2010a. Chrome web store.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Google. 2010b. recaptcha faq", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Google. 2010b. recaptcha faq.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Android accessibility suite", "authors": [ { "first": "", "middle": [], "last": "Google", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Google. 2021. Android accessibility suite.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The dark (patterns) side of ux design", "authors": [ { "first": "Colin", "middle": [ "M" ], "last": "Gray", "suffix": "" }, { "first": "Yubo", "middle": [], "last": "Kou", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Battles", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Hoggatt", "suffix": "" }, { "first": "Austin", "middle": [ "L" ], "last": "Toombs", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18", "volume": "", "issue": "", "pages": "1--14", "other_ids": { "DOI": [ "10.1145/3173574.3174108" ] }, "num": null, "urls": [], "raw_text": "Colin M. Gray, Yubo Kou, Bryan Battles, Joseph Hog- gatt, and Austin L. Toombs. 2018. The dark (pat- terns) side of ux design. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI '18, page 1-14, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Studios Happening. 2021. Swipe for facebook. hidelikes. 2022. Hide likes", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Studios Happening. 2021. Swipe for facebook. hidelikes. 2022. 
Hide likes.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Mytime: Designing and evaluating an intervention for smartphone non-use", "authors": [ { "first": "Alexis", "middle": [], "last": "Hiniker", "suffix": "" }, { "first": "(", "middle": [], "last": "Sungsoo", "suffix": "" }, { "first": ")", "middle": [], "last": "Ray", "suffix": "" }, { "first": "Tadayoshi", "middle": [], "last": "Hong", "suffix": "" }, { "first": "Julie", "middle": [ "A" ], "last": "Kohno", "suffix": "" }, { "first": "", "middle": [], "last": "Kientz", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI '16", "volume": "", "issue": "", "pages": "4746--4757", "other_ids": { "DOI": [ "10.1145/2858036.2858403" ] }, "num": null, "urls": [], "raw_text": "Alexis Hiniker, Sungsoo (Ray) Hong, Tadayoshi Kohno, and Julie A. Kientz. 2016. Mytime: Designing and evaluating an intervention for smartphone non-use. In Proceedings of the 2016 CHI Conference on Hu- man Factors in Computing Systems, CHI '16, page 4746-4757, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Online Harms White Paper", "authors": [ { "first": "H", "middle": [ "M" ], "last": "Government", "suffix": "" } ], "year": 2019, "venue": "Government Report on Transparency Reporting", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Government HM. 2019. Online Harms White Paper. Government Report on Transparency Reporting.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Shaping the design of smartphone-based interventions for self-harm", "authors": [ { "first": "Mahsa", "middle": [], "last": "Honary", "suffix": "" }, { "first": "Beth", "middle": [], "last": "Bell", "suffix": "" }, { "first": "Sarah", "middle": [], "last": "Clinch", "suffix": "" }, { "first": "Julio", "middle": [], "last": "Vega", "suffix": "" }, { "first": "Leo", "middle": [], "last": "Kroll", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Sefi", "suffix": "" }, { "first": "Roisin", "middle": [], "last": "Mcnaney", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI '20", "volume": "", "issue": "", "pages": "1--14", "other_ids": { "DOI": [ "10.1145/3313831.3376370" ] }, "num": null, "urls": [], "raw_text": "Mahsa Honary, Beth Bell, Sarah Clinch, Julio Vega, Leo Kroll, Aaron Sefi, and Roisin McNaney. 2020. Shap- ing the design of smartphone-based interventions for self-harm. In Proceedings of the 2020 CHI Confer- ence on Human Factors in Computing Systems, CHI '20, page 1-14, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "HuggingFace. 2022. roberta-base", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "HuggingFace. 2022. 
roberta-base.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Hide: Fine-grained permissions in android applications", "authors": [ { "first": "Jinseong", "middle": [], "last": "Jeon", "suffix": "" }, { "first": "Kristopher", "middle": [ "K" ], "last": "Micinski", "suffix": "" }, { "first": "Jeffrey", "middle": [ "A" ], "last": "Vaughan", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Fogel", "suffix": "" }, { "first": "Nikhilesh", "middle": [], "last": "Reddy", "suffix": "" }, { "first": "Jeffrey", "middle": [ "S" ], "last": "Foster", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Millstein", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Second ACM Workshop on Security and Privacy in Smartphones and Mobile Devices -SPSM '12", "volume": "3", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/2381934.2381938" ] }, "num": null, "urls": [], "raw_text": "Jinseong Jeon, Kristopher K. Micinski, Jeffrey A. Vaughan, Ari Fogel, Nikhilesh Reddy, Jeffrey S. Fos- ter, and Todd Millstein. 2012. Dr. Android and Mr. Hide: Fine-grained permissions in android applica- tions. In Proceedings of the Second ACM Workshop on Security and Privacy in Smartphones and Mobile Devices -SPSM '12, page 3, Raleigh, North Carolina, USA. ACM Press.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Learning and using the cognitive walkthrough method: A case study approach", "authors": [ { "first": "Bonnie", "middle": [ "E" ], "last": "John", "suffix": "" }, { "first": "Hilary", "middle": [], "last": "Packer", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '95", "volume": "", "issue": "", "pages": "429--436", "other_ids": { "DOI": [ "10.1145/223904.223962" ] }, "num": null, "urls": [], "raw_text": "Bonnie E. John and Hilary Packer. 1995. Learning and using the cognitive walkthrough method: A case study approach. In Proceedings of the SIGCHI Con- ference on Human Factors in Computing Systems, CHI '95, page 429-436, USA. ACM Press/Addison- Wesley Publishing Co.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Nugu: A group-based intervention app for improving self-regulation of limiting smartphone use", "authors": [ { "first": "Minsam", "middle": [], "last": "Ko", "suffix": "" }, { "first": "Subin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Joonwon", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Heizmann", "suffix": "" }, { "first": "Jinyoung", "middle": [], "last": "Jeong", "suffix": "" }, { "first": "Uichin", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Daehee", "middle": [], "last": "Shin", "suffix": "" }, { "first": "Koji", "middle": [], "last": "Yatani", "suffix": "" }, { "first": "Junehwa", "middle": [], "last": "Song", "suffix": "" }, { "first": "Kyong-Mee", "middle": [], "last": "Chung", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, CSCW '15", "volume": "", "issue": "", "pages": "1235--1245", "other_ids": { "DOI": [ "10.1145/2675133.2675244" ] }, "num": null, "urls": [], "raw_text": "Minsam Ko, Subin Yang, Joonwon Lee, Christian Heiz- mann, Jinyoung Jeong, Uichin Lee, Daehee Shin, Koji Yatani, Junehwa Song, and Kyong-Mee Chung. 2015. Nugu: A group-based intervention app for improving self-regulation of limiting smartphone use. 
In Proceedings of the 18th ACM Conference on Com- puter Supported Cooperative Work & Social Com- puting, CSCW '15, page 1235-1245, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Siamese neural networks for one-shot image recognition", "authors": [ { "first": "Gregory", "middle": [], "last": "Koch", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gregory Koch, Richard Zemel, and Ruslan Salakhut- dinov. 2015. Siamese neural networks for one-shot image recognition.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "2021. I want my app that way: Reclaiming sovereignty over personal devices", "authors": [ { "first": "Konrad", "middle": [], "last": "Kollnig", "suffix": "" }, { "first": "Siddhartha", "middle": [], "last": "Datta", "suffix": "" }, { "first": "Max", "middle": [], "last": "Van Kleek", "suffix": "" } ], "year": null, "venue": "Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems Late-Breaking Works", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Konrad Kollnig, Siddhartha Datta, and Max Van Kleek. 2021. I want my app that way: Reclaiming sovereignty over personal devices. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems Late-Breaking Works, Yokohama, Japan. ACM Press.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Auto logout", "authors": [], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "AV Tech Labs. 2019. Auto logout.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "The sams: Smartphone addiction management system and verification", "authors": [ { "first": "Heyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Heejune", "middle": [], "last": "Ahn", "suffix": "" }, { "first": "Samwook", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Wanbok", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2014, "venue": "J. Med. Syst", "volume": "38", "issue": "1", "pages": "1--10", "other_ids": { "DOI": [ "10.1007/s10916-013-0001-1" ] }, "num": null, "urls": [], "raw_text": "Heyoung Lee, Heejune Ahn, Samwook Choi, and Wan- bok Choi. 2014. The sams: Smartphone addiction management system and verification. J. Med. Syst., 38(1):1-10.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Code 2.0, 1 edition", "authors": [ { "first": "Lawence", "middle": [], "last": "Lessig", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lawence Lessig. Code 2.0, 1 edition. 
Basic Books.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Appdetox: Helping users with mobile app addiction", "authors": [ { "first": "Markus", "middle": [], "last": "L\u00f6chtefeld", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "B\u00f6hmer", "suffix": "" }, { "first": "Lyubomir", "middle": [], "last": "Ganev", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia, MUM '13", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/2541831.2541870" ] }, "num": null, "urls": [], "raw_text": "Markus L\u00f6chtefeld, Matthias B\u00f6hmer, and Lyubomir Ganev. 2013. Appdetox: Helping users with mobile app addiction. In Proceedings of the 12th Interna- tional Conference on Mobile and Ubiquitous Multi- media, MUM '13, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Mojtaba Shahin, and Jing Zhan. 2021. Characteristics and challenges of low-code development: The practitioners' perspective", "authors": [ { "first": "Yajing", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Liang", "suffix": "" }, { "first": "Chong", "middle": [], "last": "Wang", "suffix": "" } ], "year": null, "venue": "Proceedings of the 15th ACM / IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yajing Luo, Peng Liang, Chong Wang, Mojtaba Shahin, and Jing Zhan. 2021. Characteristics and challenges of low-code development: The practitioners' perspec- tive. Proceedings of the 15th ACM / IEEE Interna- tional Symposium on Empirical Software Engineer- ing and Measurement (ESEM).", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Max Van Kleek, and Nigel Shadbolt. 2020a. 'I Just Want to Hack Myself to Not Get Distracted': Evaluating Design Interventions for Self-Control on Facebook", "authors": [ { "first": "Ulrik", "middle": [], "last": "Lyngs", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Lukoff", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Slovak", "suffix": "" }, { "first": "William", "middle": [], "last": "Seymour", "suffix": "" }, { "first": "Helena", "middle": [], "last": "Webb", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Jirotka", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1--15", "other_ids": { "DOI": [ "10.1145/3313831.3376672" ] }, "num": null, "urls": [], "raw_text": "Ulrik Lyngs, Kai Lukoff, Petr Slovak, William Sey- mour, Helena Webb, Marina Jirotka, Jun Zhao, Max Van Kleek, and Nigel Shadbolt. 2020a. 'I Just Want to Hack Myself to Not Get Distracted': Evaluating Design Interventions for Self-Control on Facebook, page 1-15. Association for Computing Machinery, New York, NY, USA.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Max Van Kleek, and Nigel Shadbolt. 2020b. 
'I Just Want to Hack Myself to Not Get Distracted': Evaluating Design Interventions for Self-Control on Facebook", "authors": [ { "first": "Ulrik", "middle": [], "last": "Lyngs", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Lukoff", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Slovak", "suffix": "" }, { "first": "William", "middle": [], "last": "Seymour", "suffix": "" }, { "first": "Helena", "middle": [], "last": "Webb", "suffix": "" }, { "first": "Marina", "middle": [], "last": "Jirotka", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhao", "suffix": "" } ], "year": null, "venue": "Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems", "volume": "", "issue": "", "pages": "1--15", "other_ids": { "DOI": [ "10.1145/3313831.3376672" ] }, "num": null, "urls": [], "raw_text": "Ulrik Lyngs, Kai Lukoff, Petr Slovak, William Sey- mour, Helena Webb, Marina Jirotka, Jun Zhao, Max Van Kleek, and Nigel Shadbolt. 2020b. 'I Just Want to Hack Myself to Not Get Distracted': Evaluating Design Interventions for Self-Control on Facebook. In Proceedings of the 2020 CHI Conference on Hu- man Factors in Computing Systems, pages 1-15, Hon- olulu HI USA. ACM.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "The history of software testing", "authors": [ { "first": "Joris", "middle": [], "last": "Meerts", "suffix": "" }, { "first": "Dorothy", "middle": [], "last": "Graham", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joris Meerts and Dorothy Graham. 2010. The history of software testing.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Content restrictions based on local law", "authors": [ { "first": "", "middle": [], "last": "Meta", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Meta. 2022. Content restrictions based on local law.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "What is being transferred in transfer learning?", "authors": [ { "first": "Hanie", "middle": [], "last": "Behnam Neyshabur", "suffix": "" }, { "first": "Chiyuan", "middle": [], "last": "Sedghi", "suffix": "" }, { "first": "", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2020, "venue": "Advances in Neural Information Processing Systems", "volume": "33", "issue": "", "pages": "512--523", "other_ids": {}, "num": null, "urls": [], "raw_text": "Behnam Neyshabur, Hanie Sedghi, and Chiyuan Zhang. 2020. What is being transferred in transfer learning? In Advances in Neural Information Processing Sys- tems, volume 33, pages 512-523. Curran Associates, Inc.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Good vibrations: Can a digital nudge reduce digital overload", "authors": [ { "first": "Fabian", "middle": [], "last": "Okeke", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Sobolev", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Dell", "suffix": "" }, { "first": "Deborah", "middle": [], "last": "Estrin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, Mo-bileHCI '18", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/3229434.3229463" ] }, "num": null, "urls": [], "raw_text": "Fabian Okeke, Michael Sobolev, Nicola Dell, and Deb- orah Estrin. 2018. 
Good vibrations: Can a digital nudge reduce digital overload? In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services, Mo- bileHCI '18, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "A model for types and levels of human interaction with automation", "authors": [ { "first": "R", "middle": [], "last": "Parasuraman", "suffix": "" }, { "first": "T", "middle": [ "B" ], "last": "Sheridan", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Wickens", "suffix": "" } ], "year": 2000, "venue": "IEEE Transactions on Systems, Man, and Cybernetics -Part A: Systems and Humans", "volume": "30", "issue": "3", "pages": "286--297", "other_ids": { "DOI": [ "10.1109/3468.844354" ] }, "num": null, "urls": [], "raw_text": "R. Parasuraman, T.B. Sheridan, and C.D. Wickens. 2000. A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics -Part A: Systems and Humans, 30(3):286-297.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Defining digital self-harm", "authors": [ { "first": "Jessica", "middle": [], "last": "Pater", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Mynatt", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW '17", "volume": "", "issue": "", "pages": "1501--1513", "other_ids": { "DOI": [ "10.1145/2998181.2998224" ] }, "num": null, "urls": [], "raw_text": "Jessica Pater and Elizabeth Mynatt. 2017. Defining dig- ital self-harm. In Proceedings of the 2017 ACM Con- ference on Computer Supported Cooperative Work and Social Computing, CSCW '17, page 1501-1513, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Exploring indicators of digital self-harm with eating disorder patients: A case study", "authors": [ { "first": "Jessica", "middle": [ "A" ], "last": "Pater", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Farrington", "suffix": "" }, { "first": "Alycia", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Lauren", "middle": [ "E" ], "last": "Reining", "suffix": "" }, { "first": "Tammy", "middle": [], "last": "Toscos", "suffix": "" }, { "first": "Elizabeth", "middle": [ "D" ], "last": "Mynatt", "suffix": "" } ], "year": 2019, "venue": "Proc. ACM Hum.-Comput. Interact", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1145/3359186" ] }, "num": null, "urls": [], "raw_text": "Jessica A. Pater, Brooke Farrington, Alycia Brown, Lau- ren E. Reining, Tammy Toscos, and Elizabeth D. My- natt. 2019. Exploring indicators of digital self-harm with eating disorder patients: A case study. Proc. ACM Hum.-Comput. Interact., 3(CSCW).", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Samy Bengio, and Oriol Vinyals. 2020. Rapid learning or feature reuse? towards understanding the effectiveness of maml", "authors": [ { "first": "Aniruddh", "middle": [], "last": "Raghu", "suffix": "" }, { "first": "Maithra", "middle": [], "last": "Raghu", "suffix": "" } ], "year": null, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals. 2020. Rapid learning or feature reuse? towards understanding the effectiveness of maml. 
In International Conference on Learning Representa- tions.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "DroidForce: Enforcing Complex, Data-centric, System-wide Policies in Android", "authors": [ { "first": "Siegfried", "middle": [], "last": "Rasthofer", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Arzt", "suffix": "" }, { "first": "Enrico", "middle": [], "last": "Lovat", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Bodden", "suffix": "" } ], "year": 2014, "venue": "Ninth International Conference on Availability, Reliability and Security", "volume": "", "issue": "", "pages": "40--49", "other_ids": { "DOI": [ "10.1109/ARES.2014.13" ] }, "num": null, "urls": [], "raw_text": "Siegfried Rasthofer, Steven Arzt, Enrico Lovat, and Eric Bodden. 2014. DroidForce: Enforcing Com- plex, Data-centric, System-wide Policies in Android. In 2014 Ninth International Conference on Availabil- ity, Reliability and Security, pages 40-49, Fribourg, Switzerland. IEEE.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Screenomics: A framework to capture and analyze personal life experiences and the ways that technology shapes them", "authors": [ { "first": "Byron", "middle": [], "last": "Reeves", "suffix": "" }, { "first": "Nilam", "middle": [], "last": "Ram", "suffix": "" }, { "first": "Thomas", "middle": [ "N" ], "last": "Robinson", "suffix": "" }, { "first": "James", "middle": [ "J" ], "last": "Cummings", "suffix": "" }, { "first": "C", "middle": [ "Lee" ], "last": "Giles", "suffix": "" }, { "first": "Jennifer", "middle": [], "last": "Pan", "suffix": "" }, { "first": "Agnese", "middle": [], "last": "Chiatti", "suffix": "" }, { "first": "Mj", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Katie", "middle": [], "last": "Roehrick", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Anupriya", "middle": [], "last": "Gagneja", "suffix": "" }, { "first": "Miriam", "middle": [], "last": "Brinberg", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Muise", "suffix": "" }, { "first": "Yingdan", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Mufan", "middle": [], "last": "Luo", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Fitzgerald", "suffix": "" }, { "first": "Leo", "middle": [], "last": "Yeykelis", "suffix": "" } ], "year": 2021, "venue": "", "volume": "36", "issue": "", "pages": "150--201", "other_ids": { "DOI": [ "10.1080/07370024.2019.1578652" ], "PMID": [ "33867652" ] }, "num": null, "urls": [], "raw_text": "Byron Reeves, Nilam Ram, Thomas N. Robinson, James J. Cummings, C. Lee Giles, Jennifer Pan, Ag- nese Chiatti, Mj Cho, Katie Roehrick, Xiao Yang, Anupriya Gagneja, Miriam Brinberg, Daniel Muise, Yingdan Lu, Mufan Luo, Andrew Fitzgerald, and Leo Yeykelis. 2021. Screenomics: A framework to capture and analyze personal life experiences and the ways that technology shapes them. Hu- man-Computer Interaction, 36(2):150-201. 
PMID: 33867652.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "Funding Information: The US National Institutes of Health (NIH) is Publisher Copyright: \u00a9 2020", "authors": [ { "first": "Byron", "middle": [], "last": "Reeves", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Robinson", "suffix": "" }, { "first": "Nilam", "middle": [], "last": "Ram", "suffix": "" } ], "year": 2020, "venue": "Nature", "volume": "577", "issue": "7790", "pages": "314--317", "other_ids": { "DOI": [ "10.1038/d41586-020-00032-5" ] }, "num": null, "urls": [], "raw_text": "Byron Reeves, Thomas Robinson, and Nilam Ram. 2020. Time for the human screenome project. Na- ture, 577(7790):314-317. Funding Information: The US National Institutes of Health (NIH) is Publisher Copyright: \u00a9 2020, Nature.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "Usability evaluation with the cognitive walkthrough", "authors": [ { "first": "John", "middle": [], "last": "Rieman", "suffix": "" }, { "first": "Marita", "middle": [], "last": "Franzke", "suffix": "" }, { "first": "David", "middle": [], "last": "Redmiles", "suffix": "" } ], "year": 1995, "venue": "Conference Companion on Human Factors in Computing Systems, CHI '95", "volume": "", "issue": "", "pages": "387--388", "other_ids": { "DOI": [ "10.1145/223355.223735" ] }, "num": null, "urls": [], "raw_text": "John Rieman, Marita Franzke, and David Redmiles. 1995. Usability evaluation with the cognitive walk- through. In Conference Companion on Human Fac- tors in Computing Systems, CHI '95, page 387-388, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Xposed framework", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "rovo89. 2020. Xposed framework.", "links": null }, "BIBREF60": { "ref_id": "b60", "title": "Adaptive subspaces for few-shot learning", "authors": [ { "first": "Christian", "middle": [], "last": "Simon", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Koniusz", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Nock", "suffix": "" }, { "first": "Mehrtash", "middle": [], "last": "Harandi", "suffix": "" } ], "year": 2020, "venue": "2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", "volume": "", "issue": "", "pages": "4135--4144", "other_ids": { "DOI": [ "10.1109/CVPR42600.2020.00419" ] }, "num": null, "urls": [], "raw_text": "Christian Simon, Piotr Koniusz, Richard Nock, and Mehrtash Harandi. 2020. Adaptive subspaces for few-shot learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4135-4144.", "links": null }, "BIBREF61": { "ref_id": "b61", "title": "Unhook -remove youtube recommended videos", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Unhook. 2022. 
Unhook -remove youtube recom- mended videos.", "links": null }, "BIBREF62": { "ref_id": "b62", "title": "Learning from the worst: Dynamically generated datasets to improve online hate detection", "authors": [ { "first": "Bertie", "middle": [], "last": "Vidgen", "suffix": "" }, { "first": "Tristan", "middle": [], "last": "Thrush", "suffix": "" }, { "first": "Zeerak", "middle": [], "last": "Waseem", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1667--1682", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.132" ] }, "num": null, "urls": [], "raw_text": "Bertie Vidgen, Tristan Thrush, Zeerak Waseem, and Douwe Kiela. 2021. Learning from the worst: Dy- namically generated datasets to improve online hate detection. In Proceedings of the 59th Annual Meet- ing of the Association for Computational Linguistics and the 11th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 1667-1682, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF63": { "ref_id": "b63", "title": "Virtual xposed", "authors": [ { "first": "", "middle": [], "last": "Vrtualapp", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "VrtualApp. 2016. Virtual xposed.", "links": null }, "BIBREF64": { "ref_id": "b64", "title": "Trick me if you can: Human-in-the-loop generation of adversarial examples for question answering", "authors": [ { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Rodriguez", "suffix": "" }, { "first": "Shi", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Ikuya", "middle": [], "last": "Yamada", "suffix": "" }, { "first": "Jordan", "middle": [], "last": "Boyd-Graber", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "387--401", "other_ids": { "DOI": [ "10.1162/tacl_a_00279" ] }, "num": null, "urls": [], "raw_text": "Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Ya- mada, and Jordan Boyd-Graber. 2019. Trick me if you can: Human-in-the-loop generation of adversar- ial examples for question answering. 
Transactions of the Association for Computational Linguistics, 7:387- 401.", "links": null }, "BIBREF65": { "ref_id": "b65", "title": "Understanding and discovering deliberate selfharm content in social media", "authors": [ { "first": "Yilin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Jiliang", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Jundong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Baoxin", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yali", "middle": [], "last": "Wan", "suffix": "" }, { "first": "Clayton", "middle": [], "last": "Mellina", "suffix": "" }, { "first": "O'", "middle": [], "last": "Neil", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Hare", "suffix": "" }, { "first": "", "middle": [], "last": "Chang", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 26th International Conference on World Wide Web, WWW '17", "volume": "", "issue": "", "pages": "93--102", "other_ids": { "DOI": [ "10.1145/3038912.3052555" ] }, "num": null, "urls": [], "raw_text": "Yilin Wang, Jiliang Tang, Jundong Li, Baoxin Li, Yali Wan, Clayton Mellina, Neil O'Hare, and Yi Chang. 2017. Understanding and discovering deliberate self- harm content in social media. In Proceedings of the 26th International Conference on World Wide Web, WWW '17, page 93-102, Republic and Canton of Geneva, CHE. International World Wide Web Con- ferences Steering Committee.", "links": null }, "BIBREF66": { "ref_id": "b66", "title": "News feed eradicator for facebook. Foundation Wikimedia", "authors": [ { "first": "Jordan", "middle": [], "last": "West", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jordan West. 2012. News feed eradicator for facebook. Foundation Wikimedia. Wikimedia downloads.", "links": null }, "BIBREF67": { "ref_id": "b67", "title": "Tianlong Ma, and Liang He. 2021. A survey of human-in-the-loop for machine learning", "authors": [ { "first": "Xingjiao", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Luwei", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Yixuan", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Junhang", "middle": [], "last": "Zhang", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.48550/ARXIV.2108.00941" ] }, "num": null, "urls": [], "raw_text": "Xingjiao Wu, Luwei Xiao, Yixuan Sun, Junhang Zhang, Tianlong Ma, and Liang He. 2021. A survey of human-in-the-loop for machine learning.", "links": null }, "BIBREF68": { "ref_id": "b68", "title": "Aurasium: Practical policy enforcement for android applications", "authors": [ { "first": "Rubin", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Hassen", "middle": [], "last": "Sa\u00efdi", "suffix": "" }, { "first": "Ross", "middle": [], "last": "Anderson", "suffix": "" } ], "year": 2012, "venue": "21st USENIX Security Symposium (USENIX Security 12)", "volume": "", "issue": "", "pages": "539--552", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rubin Xu, Hassen Sa\u00efdi, and Ross Anderson. 2012. Aurasium: Practical policy enforcement for android applications. In 21st USENIX Security Symposium (USENIX Security 12), pages 539-552, Bellevue, WA. 
USENIX Association.", "links": null }, "BIBREF69": { "ref_id": "b69", "title": "How to invest my time: Lessons from human-in-the-loop entity extraction", "authors": [ { "first": "Shanshan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Lihong", "middle": [], "last": "He", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Dragut", "suffix": "" }, { "first": "Slobodan", "middle": [], "last": "Vucetic", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19", "volume": "", "issue": "", "pages": "2305--2313", "other_ids": { "DOI": [ "10.1145/3292500.3330773" ] }, "num": null, "urls": [], "raw_text": "Shanshan Zhang, Lihong He, Eduard Dragut, and Slobodan Vucetic. 2019. How to invest my time: Lessons from human-in-the-loop entity extraction. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19, pages 2305-2313, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF70": { "ref_id": "b70", "title": "EAST: An efficient and accurate scene text detector", "authors": [ { "first": "Xinyu", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Cong", "middle": [], "last": "Yao", "suffix": "" }, { "first": "He", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Yuzhi", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Shuchang", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Weiran", "middle": [], "last": "He", "suffix": "" }, { "first": "Jiajun", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xinyu Zhou, Cong Yao, He Wen, Yuzhi Wang, Shuchang Zhou, Weiran He, and Jiajun Liang. 2017. EAST: An efficient and accurate scene text detector.", "links": null }, "BIBREF71": { "ref_id": "b71", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "authors": [ { "first": "Yukun", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Kiros", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Zemel", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Raquel", "middle": [], "last": "Urtasun", "suffix": "" }, { "first": "Antonio", "middle": [], "last": "Torralba", "suffix": "" }, { "first": "Sanja", "middle": [], "last": "Fidler", "suffix": "" } ], "year": 2015, "venue": "The IEEE International Conference on Computer Vision (ICCV)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In The IEEE International Conference on Computer Vision (ICCV).", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Walkthrough of using GreaseVision-modified interfaces. (a) User authentication: Secure gateway to the user's screenomes, personal devices, and intervention development suite. (b) Interface & interventions selection: Listings of all registered devices/emulators on the server, as well as interventions contributed by the users or community using the tool in Figure 3.
(c) Interface access: Accessing a Linux desktop from another (Linux)", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Removal of GUI elements (YouTube sharing metrics/buttons) across multiple target interfaces and operating systems. (a) Element removal on emulated desktop (macOS) (b) Element removal on emulated Android (c) Element removal on emulated iOS", "type_str": "figure", "num": null, "uris": null }, "FIGREF3": { "text": "Architecture of GreaseTerminator (left) and GreaseVision (right).", "type_str": "figure", "num": null, "uris": null }, "TABREF1": { "content": "[Table: Mask | Min. masks | Android app | iOS app | Mobile browser | Desktop browser; first listed mask: Stories bar - Twitter (min. masks: 1); remaining per-interface cell values not recoverable from the source]", "num": null, "text": "\u2713 if element removal is successful, \u2717 if element removal is unsuccessful, - if the element is not available on an interface.", "html": null, "type_str": "table" }, "TABREF2": { "content": "
[Diagram: Screen underlay input frames (Frame_t, Frame_t+1, Frame_t+2, ...) + selected visual interventions -> screen overlay output frames. Components: User, Interface, User Database, Virtual Machines & Containers, Text Hook, Mask Hook, Model Hook. Flows: the user accesses the interface and sends input commands; the hooks generate updated screen images from raw screen images and render them as an overlay; the user annotates the screenome visualization, which populates masks and models that become (personal) interventions and network interventions.]
", "num": null, "text": "The high-level architecture of GreaseTerminator. Details are explained in Section 2.3 and 4.2.", "html": null, "type_str": "table" } } } }