wassemgtk committed on
Commit 2543d61
1 Parent(s): 9047354

Update README.md

Files changed (1)
  1. README.md +1 -20
README.md CHANGED
@@ -27,24 +27,7 @@ img {
 
 ## Model Description
 
- Palmyra-small 128M is a transformer-based language model. GPT refers to a class of transformer decoder-only models similar to GPT-2 and GPT-3. It has Tensor Parallelism (TP) of 1 and Pipeline Parallelism (PP) of 1, and should fit on a single NVIDIA GPU.
-
- <figure>
-
- | Hyperparameter       | Value      |
- |----------------------|------------|
- | \\(n_{parameters}\\) | 6053381344 |
- | \\(n_{layers}\\)     | 28&ast;    |
- | \\(d_{model}\\)      | 4096       |
- | \\(d_{ff}\\)         | 16384      |
- | \\(n_{heads}\\)      | 16         |
- | \\(d_{head}\\)       | 256        |
- | \\(n_{ctx}\\)        | 2048       |
- | \\(n_{vocab}\\)      | 50257/50400&dagger; (same tokenizer as GPT-2/3) |
- | Positional Encoding  | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
- | RoPE Dimensions      | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
- <figcaption><p><strong>&ast;</strong> Each layer consists of one feedforward block and one self-attention block.</p>
- <p><strong>&dagger;</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
+ Palmyra was pretrained primarily on English text; a trace amount of non-English data from CommonCrawl is still present in the training corpus. As a member of the same decoder-only family as GPT-3, the model was pretrained with a self-supervised causal language modeling (CLM) objective. For evaluation, Palmyra follows GPT-3, reusing its prompts and general experimental setup; see the official paper for more details.
 
 The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
 dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
@@ -143,5 +126,3 @@ To cite the codebase that trained this model:
 month = May
 }
 ```
-
- If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
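
The paragraph added in this commit says Palmyra was pretrained with a causal language modeling (CLM) objective. As a minimal sketch of that objective (a PyTorch illustration of the usual shifted-label formulation, not code from the Palmyra training setup), the loss is the cross-entropy between each position's logits and the following token:

```python
# Minimal sketch of the causal language modeling (CLM) objective described in the card:
# position t is trained to predict token t+1, so logits and labels are shifted by one
# before computing cross-entropy. Illustration only, not the Palmyra training code.
import torch
import torch.nn.functional as F

def clm_loss(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    """logits: (batch, seq_len, vocab); input_ids: (batch, seq_len)."""
    shift_logits = logits[:, :-1, :].contiguous()  # drop the last position's prediction
    shift_labels = input_ids[:, 1:].contiguous()   # drop the first token as a label
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
    )

# Toy shapes only: in practice the logits come from the decoder-only model itself.
batch, seq_len, vocab = 2, 8, 50257  # vocab size of the GPT-2 tokenizer mentioned in the card
logits = torch.randn(batch, seq_len, vocab)
input_ids = torch.randint(0, vocab, (batch, seq_len))
print(clm_loss(logits, input_ids))
```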
 
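Since the card describes a standard decoder-only checkpoint (28 layers, model dimension 4096, 16 heads of dimension 256, RoPE), generation with the Hugging Face `transformers` library follows the usual causal-LM pattern. The sketch below assumes the repo id `Writer/palmyra-small`, which is an inference from the card rather than something stated in this diff; substitute the actual model id if it differs:

```python
# Hedged usage sketch for the decoder-only model this card describes.
# The repo id below is an assumption; replace it with the actual model id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Writer/palmyra-small"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Palmyra is a decoder-only transformer that"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```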