Update README.md
README.md
@@ -32,9 +32,7 @@ widget:
---

# Graphcore/gptj-mnli
-
This model is the fine-tuned version of [EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B) on the [GLUE MNLI dataset](https://huggingface.co/datasets/glue#mnli).
-
The MNLI dataset consists of pairs of sentences: a *premise* and a *hypothesis*.
The task is to predict the relation between the premise and the hypothesis, which can be:
- `entailment`: the hypothesis follows from the premise,
@@ -51,6 +49,34 @@ For example:
mnli hypothesis: Your contributions were of no help with our students' education. premise: Your contribution helped make it possible for us to provide our students with a quality education. target: contradiction <|endoftext|>
```
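The example above shows the model's text-to-text format: hypothesis and premise are packed into a single string, and the gold label is appended as a `target:`. As a minimal sketch of how such prompts could be assembled (the helper name is ours, not from the model card):

```python
# Minimal sketch (our helper, not from the model card): build an MNLI prompt
# in the "mnli hypothesis: ... premise: ... target: ..." format shown above.
def format_mnli_prompt(hypothesis, premise, target=None):
    prompt = f"mnli hypothesis: {hypothesis} premise: {premise}"
    if target is not None:
        # Training examples also carry the gold label and the end-of-text token.
        prompt += f" target: {target} <|endoftext|>"
    return prompt

print(format_mnli_prompt(
    hypothesis="Your contributions were of no help with our students' education.",
    premise="Your contribution helped make it possible for us to provide our students with a quality education.",
    target="contradiction",
))
```

At inference time the natural choice is to end the prompt after `target:` and let the model generate the label.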
+## Model description
+
+GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
+
+<figure>
+
+| Hyperparameter        | Value                                     |
+|-----------------------|-------------------------------------------|
+| \\(n_{parameters}\\)  | 6053381344                                |
+| \\(n_{layers}\\)      | 28*                                       |
+| \\(d_{model}\\)       | 4096                                      |
+| \\(d_{ff}\\)          | 16384                                     |
+| \\(n_{heads}\\)       | 16                                        |
+| \\(d_{head}\\)        | 256                                       |
+| \\(n_{ctx}\\)         | 2048                                      |
+| \\(n_{vocab}\\)       | 50257/50400† (same tokenizer as GPT-2/3)  |
+| Positional Encoding   | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
+| RoPE Dimensions       | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
+
+<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self-attention block.</p>
+<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
+
+The model consists of 28 layers with a model dimension of 4096 and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.
+
+[EleutherAI/gpt-j-6B](https://huggingface.co/EleutherAI/gpt-j-6B), our starting point for fine-tuning, is trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
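As a rough sanity check (our arithmetic, not from the model card), the reported parameter count can be approximated from the table above:

```python
# Rough check (our arithmetic): reconstruct ~6.05B parameters from the table.
# Assumes untied input/output embeddings and no attention-projection biases,
# as in the Hugging Face GPT-J implementation; exact bookkeeping may differ.
d_model, d_ff, n_layers, n_vocab = 4096, 16384, 28, 50400

embed_in = n_vocab * d_model                    # input token embeddings
attn     = 4 * d_model * d_model                # Q, K, V and output projections
mlp      = 2 * d_model * d_ff + d_ff + d_model  # two linear layers, with biases
ln       = 2 * d_model                          # one LayerNorm per layer
lm_head  = n_vocab * d_model + n_vocab          # output projection, with bias

total = embed_in + n_layers * (attn + mlp + ln) + 2 * d_model + lm_head
print(f"{total:,}")  # 6,050,882,784 -- within 0.05% of the reported 6,053,381,344
```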
80 |
## Fine-tuning and validation data
|
81 |
Fine tuning is done using the `train` split of the GLUE MNLI dataset and the performance is measured using the [validation_mismatched](https://huggingface.co/datasets/glue#mnli_mismatched) split.
|
82 |
|
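For reference, these splits can be loaded directly with the Hugging Face `datasets` library; a minimal sketch (standard `datasets` usage, not code from this model card):

```python
# Minimal sketch: load the GLUE MNLI splits referenced above.
from datasets import load_dataset

train = load_dataset("glue", "mnli", split="train")                  # fine-tuning data
val   = load_dataset("glue", "mnli", split="validation_mismatched")  # evaluation data

print(train.features["label"].names)  # ['entailment', 'neutral', 'contradiction']
print(val[0]["premise"], "||", val[0]["hypothesis"])
```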