Cyrile committed
Commit 444be69
1 Parent(s): 965ffe7

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -57,7 +57,7 @@ And now the hypothesis in French and the premise in English (cross-language cont
 # Zero-shot Classification
 The primary interest of training such models lies in their zero-shot classification performance. This means that the model is able to classify any text with any label
 without a specific training. What sets the Bloomz-560m-NLI LLMs apart in this domain is their ability to model and extract information from significantly more complex
-and lengthy test structures compared to models like BERT, RoBERTa, or CamemBERT.
+and lengthy text structures compared to models like BERT, RoBERTa, or CamemBERT.
 
 The zero-shot classification task can be summarized by:
 $$P(hypothesis=i\in\mathcal{C}|premise)=\frac{e^{P(premise=entailment\vert hypothesis=i)}}{\sum_{j\in\mathcal{C}}e^{P(premise=entailment\vert hypothesis=j)}}$$
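
For reference, the formula in the hunk above is the standard NLI-based zero-shot recipe: each candidate label is phrased as a hypothesis, each premise/hypothesis pair is scored for entailment, and a softmax is taken over the per-label entailment scores. The sketch below is illustrative only and not part of this commit; the checkpoint id `cmarkea/bloomz-560m-nli`, the sequence-classification head, the hypothesis template, and the entailment label index are assumptions to verify against the model card.

```python
# Minimal sketch of the zero-shot formula: softmax over per-label entailment scores.
# Assumptions: checkpoint id, sequence-classification head, and entailment at index 0
# (verify with model.config.id2label before relying on the output).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "cmarkea/bloomz-560m-nli"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

premise = "Le football est le sport le plus populaire au monde."
candidate_labels = ["sport", "politique", "science"]
hypothesis_template = "Ce texte parle de {}."  # assumed template
entail_idx = 0  # assumed index of the "entailment" class

entailment_scores = []
for label in candidate_labels:
    hypothesis = hypothesis_template.format(label)
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    entailment_scores.append(logits[entail_idx])

# P(hypothesis = i | premise) = softmax over the entailment scores, as in the formula.
probs = torch.softmax(torch.stack(entailment_scores), dim=0)
for label, p in zip(candidate_labels, probs):
    print(f"{label}: {p.item():.3f}")
```

In practice the same computation is exposed by the `transformers` zero-shot-classification pipeline, which handles the hypothesis template and the entailment index automatically.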