Commit d6ad933 by emilylearning
Parent(s): 312cc2b

Adding some conclusions to text

Files changed (1): app.py (+7 -1)
app.py CHANGED
@@ -238,7 +238,13 @@ In the demo below you can select among 4 different fine-tuning methods:
 
 And two different weighting schemes that were used in the loss function to nudge more toward the minority class in the dataset:
 - female pronouns.
-
+
+
+One trend that appears is that conditioning on `birth_date` metadata in both training and inference text produces the largest dose-response relationship. This seems reasonable, as the fine-tuned model is able to ‘stratify’ a learned relationship between gender pronouns and dates when both are present in the text.
+
+Meanwhile, conditioning on either no metadata or `birth_place` metadata in training has a similar middle-ground effect on this inference task.
+
+Finally, conditioning on `name` metadata in training (while again conditioning on `birth_date` in inference) shows almost no dose-response relationship. It appears the learned `name -> gender pronouns` relationship was sufficiently strong to overwhelm any more nuanced learning, such as that driven by `birth_date` or `birth_place`.
 """
 
 
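The diff above only touches docstring prose, so for context, here is a minimal, hypothetical sketch of the two mechanisms that prose refers to: conditioning the input text on a single metadata field (`birth_date`, `birth_place`, `name`, or none) and weighting the loss function toward the minority class (female pronouns). Every function name, string, and number below is an assumption made for illustration; none of it is taken from app.py.

```python
from typing import Optional

import torch
import torch.nn as nn


def condition_text(text: str, metadata: dict, field: Optional[str]) -> str:
    """Prepend one metadata field to the text (e.g. field='birth_date').

    field=None is the no-metadata baseline mentioned in the commit text.
    """
    if field is None:
        return text
    return f"{field}: {metadata[field]}. {text}"


# The same sentence conditioned four different ways.
meta = {"birth_date": "1934", "birth_place": "Warsaw", "name": "Maria"}
sentence = "She was awarded the prize for her work."
for field in (None, "birth_date", "birth_place", "name"):
    print(condition_text(sentence, meta, field))

# A class-weighted cross-entropy that up-weights the minority class
# (female pronouns); the 3.0 is an arbitrary placeholder weight.
class_weights = torch.tensor([1.0, 3.0])  # [male pronoun, female pronoun]
loss_fn = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(4, 2)           # toy batch of 4 binary predictions
labels = torch.tensor([0, 1, 1, 0])  # toy gold pronoun-class labels
print(loss_fn(logits, labels))
```

The `weight` argument of `nn.CrossEntropyLoss` is the standard PyTorch way to up-weight a minority class; the commit text mentions two weighting schemes, which would presumably differ in how such weights are computed.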