In this section, you’ll label a set of example comments to give us a sense of your perspective on what is and isn’t toxic.
We’ll then train a simple model (which we’ll refer to as "your model") that estimates the toxicity rating you would give to each comment in the full dataset (tens of thousands of comments), drawing on an existing dataset of toxicity ratings provided by other users.
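For readers curious how this kind of personalization can be implemented, here is a minimal sketch assuming a simple nearest-neighbor approach: your rating for an unseen comment is estimated from the labelers whose ratings on your labeled comments most resemble yours. The names (Labeler, predictRating, k) are hypothetical and this is not necessarily the training code the app actually uses.

// Each labeler contributes toxicity scores (0-4) for some subset of comments.
interface Labeler {
  id: string;
  ratings: Map<string, number>; // commentId -> toxicity score
}

// Mean squared difference over comments both parties rated; lower = more similar.
function distance(yourRatings: Map<string, number>, other: Labeler): number {
  let sum = 0;
  let n = 0;
  for (const [commentId, score] of yourRatings) {
    const otherScore = other.ratings.get(commentId);
    if (otherScore !== undefined) {
      sum += (score - otherScore) ** 2;
      n += 1;
    }
  }
  return n > 0 ? sum / n : Infinity;
}

// Predict your score for `commentId` as the average score from the k labelers
// most similar to you (among those who rated that comment).
function predictRating(
  yourRatings: Map<string, number>,
  labelers: Labeler[],
  commentId: string,
  k = 10
): number | undefined {
  const neighbors = labelers
    .filter((l) => l.ratings.has(commentId))
    .map((l) => ({ score: l.ratings.get(commentId)!, dist: distance(yourRatings, l) }))
    .sort((a, b) => a.dist - b.dist)
    .slice(0, k);
  if (neighbors.length === 0) return undefined;
  return neighbors.reduce((acc, c) => acc + c.score, 0) / neighbors.length;
}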
Create a New Model
Comments to label
Comments with a score of 0 or 1 will be allowed to remain on the platform.
Comments with a score of 2, 3, or 4 will be deleted from the platform.
Some comments may lack context, so if you're not sure how to rate one, feel free to mark the unsure option to skip it.
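As a small illustration of the rule above (a sketch; the name moderate is ours, not the platform's):

type ModerationAction = "keep" | "delete";

// Scores 0 and 1 keep the comment up; scores 2, 3, and 4 remove it.
function moderate(toxicityScore: number): ModerationAction {
  return toxicityScore <= 1 ? "keep" : "delete";
}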
{:else if label_mode == label_modes[1]}
Edit an Existing Model
{#key personalized_models}
{/key}
Comments to label
Comments with a score of 0 or 1 will be allowed to remain on the platform.
Comments with a score of 2, 3, or 4 will be deleted from the platform.
Some comments may lack context, so if you're not sure how to rate one, feel free to mark the unsure option to skip it.
{#key existing_model_name}
{/key}
{:else if label_mode == label_modes[2]}
Topic Model Training
In what topic area would you like to tune your model?
Comments to label
Comments with a score of 0 or 1 will be allowed to remain on the platform.
Comments with a score of 2, 3, or 4 will be deleted from the platform.
Some comments may lack context, so if you're not sure how to rate one, feel free to mark the unsure option to skip it.
{#key topic}
{/key}
{:else if label_mode == label_modes[3]}
Group Model Training
Please select just one of the five demographic axes below (A, B, C, D, or E) to identify with; we'll use your selection to set up your group-based model (a rough sketch of the matching step appears below):
Demographic axes
A: Political affiliation
B: Gender
C: Race (select all that apply)
D: LGBTQ+ Identity
E: Importance of religion
{#if group_size}
Number of labelers with matching traits: {group_size}
{/if}
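Here is a rough, hypothetical sketch of the matching step described above: pick one axis, find labelers whose value on that axis matches yours, and the size of that group is what is reported as the number of labelers with matching traits. The names (Axis, GroupLabeler, matchingLabelers) are assumptions, not the app's actual code.

type Axis = "politics" | "gender" | "race" | "lgbtq" | "religion";

// A labeler may report several values on an axis (e.g., race allows multiple selections).
interface GroupLabeler {
  id: string;
  demographics: Partial<Record<Axis, string[]>>;
}

// Labelers who share at least one value with you on the chosen axis.
function matchingLabelers(
  labelers: GroupLabeler[],
  axis: Axis,
  yourValues: string[]
): GroupLabeler[] {
  return labelers.filter((l) =>
    (l.demographics[axis] ?? []).some((v) => yourValues.includes(v))
  );
}

// The displayed group size would then simply be:
// const group_size = matchingLabelers(allLabelers, selectedAxis, yourValues).length;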
{#await promise}
{:then group_model_res}
{#if group_model_res}
The model for your selected group memberships has been successfully tuned.