pere committed
Commit 167ade1
1 parent: 09760b2

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -25,6 +25,8 @@ widget:
 
 # NB-Bert base model finetuned on Norwegian machine translated MNLI
 
+### NOTE: The demo on the right-hand side uses the English template. The results are significantly worse than what the model can produce. Please use the Colab notebook linked in the Git repository below to test the model's capabilities.
+
 ## Description
 The most effective way of creating a good classifier is to finetune a pre-trained model for the specific task at hand. However, in many cases this is simply impossible.
 [Yin et al.](https://arxiv.org/abs/1909.00161) proposed a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers. The method works by reformulating the question as an MNLI hypothesis. If we want to figure out whether a text is about "sport", we simply state that "This text is about sport" ("Denne teksten handler om sport").