---
language:
  - 'no'
  - nb
  - nn
tags:
  - pytorch
  - causal-lm
license: apache-2.0
datasets:
  - NbAiLab/NCC
  - mc4
  - oscar
---

- Release 1.0beta (April 28th, 2022): Half-precision weights

NB-GPT-J-6B

Demo: https://ai.nb.no/demo/nb-gpt-j-6B/ (Be patient, it runs on CPU 😅)

Model Description

NB-GPT-J-6B is a Norwegian-finetuned version of GPT-J 6B, a transformer model trained using Ben Wang's Mesh Transformer JAX. "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters (6 billion).

Hyperparameter         Value
n_parameters           6053381344
n_layers               28*
d_model                4096
d_ff                   16384
n_heads                16
d_head                 256
n_ctx                  2048
n_vocab                50257/50400† (same tokenizer as GPT-2/3)
Positional Encoding    Rotary Position Embedding (RoPE)
RoPE Dimensions        64

* Each layer consists of one feedforward block and one self-attention block.

† Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.

The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3.
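
For reference, these hyperparameters map directly onto the standard GPT-J configuration object in the Hugging Face transformers library. The snippet below is only a minimal sketch of that correspondence (it builds a fresh configuration for illustration, not the exact configuration file shipped with this checkpoint):

from transformers import GPTJConfig

# Minimal sketch: the hyperparameters above expressed as a GPT-J configuration.
# n_inner is left at its default (None), which transformers expands to
# 4 * n_embd = 16384, i.e. d_ff in the table above.
config = GPTJConfig(
    vocab_size=50400,  # n_vocab: embedding matrix size; the tokenizer only uses 50257 entries
    n_positions=2048,  # n_ctx: maximum context length
    n_embd=4096,       # d_model
    n_layer=28,        # n_layers
    n_head=16,         # n_heads; d_head = n_embd / n_head = 256
    rotary_dim=64,     # RoPE is applied to 64 dimensions of each head
)
print(config)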

Training data

NB-GPT-J-6B was finetuned on NCC, the Norwegian Colossal Corpus, plus other Internet sources like Wikipedia, mC4, and OSCAR.
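
As a quick way to inspect the NCC portion of the training data, the sketch below streams it from the Hugging Face Hub with the datasets library. It is only an illustration; the "train" split name and the streaming mode are assumptions, not documented usage from this model card.

from datasets import load_dataset

# Stream the Norwegian Colossal Corpus without downloading it in full.
# The split name "train" is assumed here for illustration.
ncc = load_dataset("NbAiLab/NCC", split="train", streaming=True)
print(next(iter(ncc)))  # peek at a single document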

Training procedure

This model was finetuned for 17 billion tokens over 193,000 steps on a TPU v3-8 VM. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
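
To make that objective concrete, the following minimal sketch (in PyTorch, assuming a batch of token IDs and the corresponding model logits) computes the standard next-token cross-entropy by shifting the labels one position relative to the predictions, which is what causal language model training optimizes:

import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    # logits: (batch, seq_len, vocab_size) produced by the language model
    # input_ids: (batch, seq_len) token IDs; position t is predicted from tokens before t
    shift_logits = logits[:, :-1, :]   # predictions for positions 0 .. seq_len-2
    shift_labels = input_ids[:, 1:]    # the tokens those positions should predict
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
    )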

Intended Use and Limitations

NB-GPT-J-6B learns an inner representation of the Norwegian language that can be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating text from a prompt.

How to use

This model can be easily loaded using the AutoModelForCausalLM functionality:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("NbAiLab/nb-gpt-j-6B")
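
A generation example follows as a hedged sketch: the Norwegian prompt and the sampling settings are illustrative assumptions rather than recommendations from the model authors, and passing torch_dtype=torch.float16 (on a GPU) is an optional way to load the half-precision weights.

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-gpt-j-6B")
# Add torch_dtype=torch.float16 here to load the half-precision weights on a GPU.
model = AutoModelForCausalLM.from_pretrained("NbAiLab/nb-gpt-j-6B")

prompt = "Norge er et land som"  # illustrative prompt; replace with your own text
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    do_sample=True,     # sample instead of greedy decoding
    max_new_tokens=50,  # length of the continuation
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))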

Limitations and Biases

As with the original GPT-J model, the core functionality of NB-GPT-J-6B is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting NB-GPT-J-6B, it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon NB-GPT-J-6B to produce factually accurate output.

The original GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon the use case, GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. A fine-grained analysis of the bias contained in the corpus used for fine-tuning is still pending.

As with all language models, it is hard to predict in advance how NB-GPT-J-6B will respond to particular prompts, and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.

Evaluation results

We still have to find proper datasets to evaluate the model, so help is welcome!

Citation and Related Information

BibTeX entry

To cite this model or the corpus used:

@inproceedings{kummervold2021operationalizing,
  title={Operationalizing a National Digital Library: The Case for a Norwegian Transformer Model},
  author={Kummervold, Per E and De la Rosa, Javier and Wetjen, Freddy and Brygfjeld, Svein Arne},
  booktitle={Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)},
  pages={20--29},
  year={2021},
  url={https://aclanthology.org/2021.nodalida-main.3/}
}

If you use this model, we would love to hear about it! Reach out on Twitter, GitHub, Discord, or shoot us an email.

Disclaimer

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence. In no event shall the owner of the models (the National Library of Norway) be liable for any results arising from the use made by third parties of these models.

Acknowledgements

This project would not have been possible without compute generously provided by Google through the TPU Research Cloud, and the Cloud TPU team, who provided early access to the Cloud TPU VM Alpha. Special thanks to Stella Biderman for her general openness, and to Ben Wang for the main codebase.