Update README.md
This is the shared model with the MNLI classifier on top. Its encoder was trained on many datasets, including bigbench, Anthropic rlhf, and anli, alongside many other NLI and classification tasks, with SequenceClassification heads on top of a single shared encoder.
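As a minimal usage sketch, the model loads through the standard transformers SequenceClassification API (the checkpoint id below is an assumption; substitute this repository's actual model name):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint id; replace with this repository's actual model name.
model_id = "sileod/deberta-v3-base-tasksource-nli"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# The MNLI head scores a premise/hypothesis pair over the entailment labels.
inputs = tokenizer("A man is playing guitar.", "Someone is making music.",
                   return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```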
Each task had a specific CLS embedding, which was dropped 10% of the time to facilitate using the model without it. All multiple-choice tasks used the same classification layers. For classification tasks, heads shared weights whenever their labels matched.
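The sketch below illustrates the task-specific CLS embedding with 10% dropout; it is an illustration of the idea only, not the actual tasksource implementation, and the `TaskCLS` name and its parameters are made up:

```python
import torch
import torch.nn as nn

class TaskCLS(nn.Module):
    """Illustrative sketch: each task owns a learned CLS embedding that
    replaces the [CLS] slot, and it is dropped 10% of the time during
    training so the encoder also works without it."""

    def __init__(self, num_tasks: int, hidden_size: int, drop_prob: float = 0.1):
        super().__init__()
        self.task_cls = nn.Embedding(num_tasks, hidden_size)
        self.drop_prob = drop_prob

    def forward(self, token_embeds: torch.Tensor, task_id: int) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden) token embeddings from the encoder
        if self.training and torch.rand(()).item() < self.drop_prob:
            return token_embeds  # 10% of the time: keep the plain [CLS] embedding
        batch = token_embeds.size(0)
        task_vec = self.task_cls.weight[task_id].view(1, 1, -1).expand(batch, 1, -1)
        # Swap the task-specific vector into position 0 without in-place mutation.
        return torch.cat([task_vec, token_embeds[:, 1:, :]], dim=1)
```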
The number of examples per task was capped at 64k. The model was trained for 30k steps with a batch size of 384 and a peak learning rate of 2e-5.
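As a sketch, these hyperparameters map onto `transformers.TrainingArguments` roughly as follows; only the step count, effective batch size, per-task cap, and peak learning rate come from this card, while the device/accumulation split, scheduler, and warmup are assumptions:

```python
from transformers import TrainingArguments

MAX_EXAMPLES_PER_TASK = 64_000  # per-task cap stated above

def cap_task(dataset):
    # Truncate a task's training split to the 64k-example cap.
    n = min(len(dataset), MAX_EXAMPLES_PER_TASK)
    return dataset.shuffle(seed=0).select(range(n))

args = TrainingArguments(
    output_dir="tasksource-shared-encoder",
    max_steps=30_000,
    per_device_train_batch_size=48,
    gradient_accumulation_steps=8,  # 48 * 8 = 384 effective batch size
    learning_rate=2e-5,             # peak learning rate
    lr_scheduler_type="linear",     # assumption: schedule not stated
    warmup_ratio=0.06,              # assumption
)
```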
tasksource training code: https://colab.research.google.com/drive/1iB4Oxl9_B5W3ZDzXoWJN-olUbqLBxgQS?usp=sharing