carolanderson committed
Commit
4fa92be
1 Parent(s): 0362c93

add github workflows to push to hf

.github/workflows/check_size.yml ADDED

```yaml
name: Check file size
on:
  push:

  # to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  sync-to-hub:
    runs-on: ubuntu-latest
    steps:
      - name: Check large files
        uses: ActionsDesk/lfs-warning@v2.0
        with:
          filesizelimit: 10485760 # this is 10MB so we can sync to HF Spaces
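The `filesizelimit` of 10485760 bytes is 10 MiB (10 × 1024 × 1024), the cap that lets the repo sync to HF Spaces without Git LFS. The same check can be run locally before pushing; this is a hypothetical helper sketch (`list_large_files` is not part of the repo), not the workflow's actual implementation:

```shell
#!/bin/sh
# Hypothetical helper (not part of this repo): print files under DIR
# larger than LIMIT bytes, mirroring the workflow's 10485760-byte check.
list_large_files() {
  dir="$1"
  limit="$2"
  # find's -size +Nc matches files strictly larger than N bytes
  find "$dir" -type f -size +"${limit}c" -print
}

# Example: list_large_files . 10485760
```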
.github/workflows/push-to-hf.yml ADDED

```yaml
name: Sync to Hugging Face hub
on:
  push:
    branches: [docker-hf]

  # to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  sync-to-hub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0
          lfs: true
      - name: Push to hub
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
        run: git push https://carolanderson:$HF_TOKEN@huggingface.co/spaces/carolanderson/indielabel docker-hf:main
```
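The `run` step above embeds the token in the push URL (`https://<user>:<token>@huggingface.co/spaces/<user>/<space>`). The same URL can be assembled for a manual push; this is a hypothetical sketch (the `hf_space_url` helper is not part of the repo), assuming `HF_TOKEN` holds a write-scoped Hugging Face access token:

```shell
#!/bin/sh
# Hypothetical helper: build the authenticated push URL the workflow uses.
# Usage: hf_space_url <user> <space> <token>
hf_space_url() {
  printf 'https://%s:%s@huggingface.co/spaces/%s/%s\n' "$1" "$3" "$1" "$2"
}

# Mirrors the workflow's run step (requires a real token and network access):
# git push "$(hf_space_url carolanderson indielabel "$HF_TOKEN")" docker-hf:main
```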
README.md CHANGED

````diff
@@ -1,59 +1,11 @@
-# IndieLabel
-**End-User Audits: A System Empowering Communities to Lead Large-Scale Investigations of Harmful Algorithmic Behavior**
-
-Michelle S. Lam, Mitchell L. Gordon, Danaë Metaxa, Jeffrey T. Hancock, James A. Landay, Michael S. Bernstein (CSCW 2022)
-
-This repo shares our implementation of **IndieLabel**—an interactive web application for end-user auditing that we introduced in our CSCW paper.
-
-> Because algorithm audits are conducted by technical experts, audits are necessarily limited to the hypotheses that experts think to test. End users hold the promise to expand this purview, as they inhabit spaces and witness algorithmic impacts that auditors do not. In pursuit of this goal, we propose end-user audits—system-scale audits led by non-technical users—and present an approach that scaffolds end users in hypothesis generation, evidence identification, and results communication. Today, performing a system-scale audit requires substantial user effort to label thousands of system outputs, so we introduce a collaborative filtering technique that leverages the algorithmic system's own disaggregated training data to project from a small number of end user labels onto the full test set. Our end-user auditing tool, IndieLabel, employs these projected labels so that users can rapidly explore where their opinions diverge from the algorithmic system's outputs. By highlighting topic areas where the system is under-performing for the user and surfacing sets of likely error cases, the tool guides the user in authoring an audit report. In an evaluation of end-user audits on a popular comment toxicity model with 17 non-technical participants, participants both replicated issues that formal audits had previously identified and also raised previously underreported issues such as under-flagging on veiled forms of hate that perpetuate stigma and over-flagging of slurs that have been reclaimed by marginalized communities.
-
+---
+title: Indielabel
+emoji: 🏢
+colorFrom: yellow
+colorTo: yellow
+sdk: docker
+pinned: false
+app_port: 5001
 ---
 
-## Installation / Setup
-- Activate your virtual environment (tested with Python 3.8).
-- Install requirements:
-```
-$ pip install -r requirements.txt
-```
-- Download and unzip the `data` sub-directory from [this Drive folder](https://drive.google.com/file/d/1In9qAzV5t--rMmEH2R5miWpZ4IQStgFu/view?usp=sharing) and place it in the repo directory (334.2MB zipped, 549.1MB unzipped).
-
-
-- Start the Flask server:
-```
-$ python server.py
-```
-
-- Concurrently build and run the Svelte app in another terminal session:
-```
-$ cd indie_label_svelte/
-$ HOST=0.0.0.0 PORT=5000 npm run dev autobuild
-```
-
-- You can now visit `localhost:5001` to view the IndieLabel app!
-
-## Main paths
-Here's a summary of the relevant pages used for each participant in our study. For easier setup and navigation, we added URL parameters for the different labeling and auditing modes used in the study.
-- Participant's page: `localhost:5001/?user=<USER_NAME>`
-- Labeling task pages:
-    - Group-based model (group selection): `localhost:5001/?user=<USER_NAME>&tab=labeling&label_mode=3`
-    - End-user model (data labeling): `localhost:5001/?user=<USER_NAME>&tab=labeling&label_mode=0`
-    - Tutorial page: `localhost:5001/?user=DemoUser&scaffold=tutorial`
-- Auditing task pages:
-    - Fixed audit, end-user model: `localhost:5001/?user=<USER_NAME>&scaffold=personal`
-    - Fixed audit, group-based model: `localhost:5001/?user=<USER_NAME>&scaffold=personal_group`
-    - Free-form audit, end-user model: `localhost:5001/?user=<USER_NAME>&scaffold=prompts`
-
-## Setting up a new model
-- Set up your username and navigate to the **Labeling** page
-    - Option A: Using a direct URL parameter
-        - Go to `localhost:5001/?user=<USER_NAME>&tab=labeling&label_mode=0`, where in place of `<USER_NAME>`, you've entered your desired username
-    - Option B: Using the UI
-        - Go to the Labeling page and ensure that the "Create a new model" mode is selected.
-        - Select the User button on the top menu and enter your desired username.
-
-- Label all of the examples in the table
-- When you're done, click the "Get Number of Comments Labeled" button to verify the number of comments that have been labeled. If there are at least 40 comments labeled, the "Train Model" button will be enabled.
-- Click on the "Train Model" button and wait for the model to train (~30-60 seconds).
-
-- Then, go to the **Auditing** page and use your new model.
-- To view the different auditing modes that we provided for our evaluation task, please refer to the URL paths listed in the "Auditing task pages" section above.
+Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
````
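With `sdk: docker`, the Space builds from a Dockerfile at the repo root and Spaces routes traffic to the declared `app_port` (5001, the port the removed README said the app serves on). The Dockerfile itself is not shown in this commit; a minimal hypothetical sketch, assuming the Flask `server.py` entry point and `requirements.txt` from the removed README, might look like:

```dockerfile
# Hypothetical Dockerfile sketch (not part of this commit).
# Spaces routes traffic to the port declared as app_port in README.md.
FROM python:3.8
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "server.py"]
```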