NorGLM committed on
Commit 8a9d894
Parent: 631ecbc

Update README.md

Files changed (1): README.md (+21 -0)
README.md CHANGED
@@ -8,6 +8,27 @@ Generative Pretrained Transformer with 369M parameters for Norwegian.
 
  It belongs to NorGLM, a suite of pretrained Norwegian Generative Language Models. The model is based on the GPT-2 architecture. NorGLM can be used for non-commercial purposes.
 
+ ## Datasets
+
  All models in NorGLM are trained on a 200 GB dataset of nearly 25 billion tokens, including Norwegian, Danish, Swedish, German and English text.
 
+ ## Run the Model
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_id = "NorGLM/NorGPT-369M"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     device_map='auto',
+     torch_dtype=torch.bfloat16
+ )
+
+ text = "Tom ønsket å gå på barene med venner"
+ # Move the inputs to the same device as the model before generating.
+ inputs = tokenizer(text, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=20)
+ # Decode and print the generated continuation.
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+
+ ## Note
  More training and evaluation details and papers will come soon!
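
The snippet in the diff uses `generate` with its defaults, which gives a single greedy continuation. For more varied text, the same call also accepts the standard Hugging Face `transformers` sampling arguments. The sketch below reuses `model`, `tokenizer`, and `inputs` from that snippet; the specific sampling values are illustrative and not taken from the model card.

```python
# Continues from the README snippet above: model, tokenizer and inputs are already defined.
# Sampling settings are illustrative, not values recommended by the model card.
sampled = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,                       # sample instead of greedy decoding
    temperature=0.8,                      # soften the next-token distribution
    top_p=0.95,                           # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,  # GPT-2-style tokenizers usually lack a pad token
)
print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```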