---
language:
- es
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- bertin-project/mc4-es-sampled

---

- [✨Version v1✨](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1): August 25th, 2022 (*[full](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1) and [half-precision weights](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1-half)*, at step 1M)
- [Version v1beta3](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta3): July 22nd, 2022 (*[full](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta3) and [half-precision weights](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta3-half)*, at step 850k)
- [Version v1beta2](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta2): June 6th, 2022 (*[full](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta2) and [half-precision weights](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta2-half)*, at step 616k)
- [Version v1beta1](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/v1beta1-half): April 28th, 2022 (*half-precision weights only*, at step 408k)
- <details><summary>All checkpoints</summary>

  - [Checkpoint 130k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/6c116e533a00db027bf0a2e0b5e06d3e0772e2d0).
  - [Checkpoint 275k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/20f424ebcc7c500d5328ed45a8c911a2a75583f1).
  - [Checkpoint 408k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/c51db24abee958efe83e52fddf6d19e5f065b818).
  - [Checkpoint 616k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/abafe00bfb03330e72a67ed9fc0958c7399f2181).
  - [Checkpoint 850k](https://huggingface.co/bertin-project/bertin-gpt-j-6B/tree/59d5064b65043f2ff2b2549e4a076854eec22b2e).

</details>

# BERTIN GPT-J-6B

<div align="center">
<img alt="BERTIN logo" src="https://huggingface.co/bertin-project/bertin-roberta-base-spanish/resolve/main/images/bertin.png" width="200px">
</div>

## Demo: https://huggingface.co/spaces/bertin-project/bertin-gpt-j-6B

## Model Description

BERTIN-GPT-J-6B is a Spanish finetuned version of GPT-J 6B, a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.

<figure>

| Hyperparameter       | Value      |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\)     | 28&ast;    |
| \\(d_{model}\\)      | 4096       |
| \\(d_{ff}\\)         | 16384      |
| \\(n_{heads}\\)      | 16         |
| \\(d_{head}\\)       | 256        |
| \\(n_{ctx}\\)        | 2048       |
| \\(n_{vocab}\\)      | 50257/50400&dagger; (same tokenizer as GPT-2/3)  |
| Positional Encoding  | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions      | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>&ast;</strong> Each layer consists of one feedforward block and one self-attention block.</p>
<p><strong>&dagger;</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>

The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
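
The same figures can be read back from the published configuration without downloading the weights. A minimal sketch, assuming the repository's `config.json` follows the standard `GPTJConfig` field names in `transformers`:

```python
from transformers import AutoConfig

# Fetch only the configuration file, not the model weights.
config = AutoConfig.from_pretrained("bertin-project/bertin-gpt-j-6B")

print(config.model_type)   # expected "gptj"
print(config.n_layer)      # layers (28)
print(config.n_embd)       # model dimension (4096)
print(config.n_head)       # attention heads (16)
print(config.rotary_dim)   # dimensions with RoPE applied (64)
print(config.n_positions)  # context length (2048)
print(config.vocab_size)   # embedding matrix rows (50400)
```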

## Training data

BERTIN-GPT-J-6B was finetuned on [mC4-es-sampled (gaussian)](https://huggingface.co/datasets/bertin-project/mc4-es-sampled), a Spanish subset of mC4 sampled using perplexity values.
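
The corpus can be inspected with the `datasets` library. The sketch below streams it to avoid a full download; note that the configuration name `"gaussian"` and the `"train"` split are assumptions based on the description above, so check the dataset card for the exact names.

```python
from datasets import load_dataset

# Stream the perplexity-sampled Spanish mC4 subset.
# The "gaussian" configuration and "train" split names are assumptions;
# see the dataset card for the configurations actually published.
dataset = load_dataset(
    "bertin-project/mc4-es-sampled",
    "gaussian",
    split="train",
    streaming=True,
)

for example in dataset.take(1):
    print(example["text"][:200])  # mC4 documents expose a "text" field
```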

## Training procedure

This model was finetuned for ~65 billion tokens (65,536,000,000) over 1,000,000 steps on a single TPU v3-8 VM. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly. Training took roughly 6 months.
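
In `transformers` terms, this objective corresponds to the standard causal language modeling loss, which the model computes internally when the input ids are also passed as labels. A minimal sketch (the example sentence is arbitrary, and loading the full-precision checkpoint requires a substantial amount of memory):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("bertin-project/bertin-gpt-j-6B")
model.eval()

batch = tokenizer("El viento sopla sobre la meseta castellana.", return_tensors="pt")

with torch.no_grad():
    # Passing the input ids as labels yields the shifted next-token
    # cross-entropy loss, i.e. the training objective described above.
    outputs = model(**batch, labels=batch["input_ids"])

print(float(outputs.loss))  # average cross-entropy per predicted token
```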

## Intended Use and Limitations

BERTIN-GPT-J-6B learns an inner representation of the Spanish language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.
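
As a rough illustration of the feature-extraction use mentioned above, hidden states can be requested with the standard `output_hidden_states` flag; taking the final-layer state of the last token, as below, is just one common pooling choice:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("bertin-project/bertin-gpt-j-6B")
model.eval()

inputs = tokenizer("Una frase de ejemplo en español.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Tuple of (n_layers + 1) tensors with shape (batch, seq_len, d_model).
hidden_states = outputs.hidden_states
# A simple sentence representation: final-layer state of the last token.
sentence_vector = hidden_states[-1][0, -1]
print(sentence_vector.shape)  # torch.Size([4096])
```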

### How to use

This model can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bertin-project/bertin-gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("bertin-project/bertin-gpt-j-6B")
```
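
Continuing from the snippet above, text can then be generated from a Spanish prompt. The prompt and sampling parameters below are purely illustrative:

```python
prompt = "La inteligencia artificial es"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    max_new_tokens=50,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2-style tokenizers define no pad token
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Specific tagged versions, such as the half-precision `v1-half` branch listed above, can be selected by passing the branch name as the `revision` argument of `from_pretrained`.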

### Limitations and Biases

As with the original GPT-J model, the core functionality of BERTIN-GPT-J-6B is to take a string of text and predict the next token. While language models are widely used for tasks beyond this, there are still many unknowns in this line of work. When prompting BERTIN-GPT-J-6B, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon BERTIN-GPT-J-6B to produce factually accurate output.

The original GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon the use case, GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile. A fine-grained analysis of the bias contained in the corpus used for fine-tuning is still pending, although some preliminary remarks are given in the [BERTIN paper](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/download/6403/3818).

As with all language models, it is hard to predict in advance how BERTIN-GPT-J-6B will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

## Evaluation results

We still have to find proper datasets to evaluate the model, so help is welcome!

## Citation and Related Information

### BibTeX entry

To cite this model:
```bibtex
@article{BERTIN,
	author = {Javier De la Rosa and Eduardo G. Ponferrada and Manu Romero and Paulo Villegas and Pablo González de Prado Salas and María Grandury},
	title = {{BERTIN}: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling},
	journal = {Procesamiento del Lenguaje Natural},
	volume = {68},
	number = {0},
	year = {2022},
	keywords = {},
	abstract = {The pre-training of large language models usually requires massive amounts of resources, both in terms of computation and data. Frequently used web sources such as Common Crawl might contain enough noise to make this pretraining sub-optimal. In this work, we experiment with different sampling methods from the Spanish version of mC4, and present a novel data-centric technique which we name perplexity sampling that enables the pre-training of language models in roughly half the amount of steps and using one fifth of the data. The resulting models are comparable to the current state-of-the-art, and even achieve better results for certain tasks. Our work is proof of the versatility of Transformers, and paves the way for small teams to train their models on a limited budget.},
	issn = {1989-7553},
	url = {http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6403},
	pages = {13--23}
}
```

If you use this model, we would love to hear about it! Reach out on Twitter, GitHub, or Discord, or send us an email.

## Team

- Javier de la Rosa ([versae](https://huggingface.co/versae))
- Eduardo González ([edugp](https://huggingface.co/edugp))
- Paulo Villegas ([paulo](https://huggingface.co/paulo))
- Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps))
- Manu Romero ([mrm8488](https://huggingface.co/mrm8488))
- María Grandury ([mariagrandury](https://huggingface.co/mariagrandury))

## Acknowledgements

This project would not have been possible without compute generously provided by the National Library of Norway and Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha provided by the Cloud TPU team. Special thanks to [Stella Biderman](https://www.stellabiderman.com) for her general openness, and to [Ben Wang](https://github.com/kingoflolz/mesh-transformer-jax) for the main codebase.

## Disclaimer

The models published in this repository are intended for a general purpose and are made available to third parties. These models may have biases and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models be liable for any results arising from the use made by third parties of these models.