---
language:
  - lb
license: mit
tags:
  - luxembourgish
  - lëtzebuergesch
  - text generation
model-index:
  - name: LuxGPT2
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          type: LuxembourgishTestDataset
          name: Luxembourgish Test Dataset
        metrics:
          - type: accuracy
            value: '0.33'
          - type: perplexity
            value: '46.69'
---

# LuxGPT-2

GPT-2 model for text generation in the Luxembourgish language, trained on 667 MB of text data consisting of RTL.lu news articles, comments, parliament speeches, the Luxembourgish Wikipedia, Newscrawl, Webcrawl, and subtitles. Training took place on a 32 GB NVIDIA Tesla V100

- with an initial learning rate of 5e-5
- with a batch size of 4
- for 109 hours
- for 30 epochs
- using the transformers library

More detailed training information can be found in `trainer_state.json`; a sketch of a comparable training setup follows below.
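
For illustration, here is a minimal sketch of a fine-tuning setup matching the hyperparameters listed above. The base checkpoint, output directory, and placeholder corpus are assumptions; only the learning rate, batch size, and epoch count come from the model card.

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Base model to fine-tune; the actual base checkpoint used for LuxGPT2
# is an assumption here.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tiny placeholder corpus standing in for the 667 MB Luxembourgish dataset.
texts = ["Moien, wéi geet et dir?"]
encodings = tokenizer(texts, truncation=True, max_length=512)
train_dataset = [
    {"input_ids": ids, "labels": ids}  # causal LM: labels mirror input_ids
    for ids in encodings["input_ids"]
]

training_args = TrainingArguments(
    output_dir="luxgpt2-finetune",   # hypothetical output directory
    learning_rate=5e-5,              # initial learning rate from the list above
    per_device_train_batch_size=4,   # batch size from the list above
    num_train_epochs=30,             # epoch count from the list above
)

trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()
```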

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("laurabernardy/LuxGPT2")
model = AutoModelForCausalLM.from_pretrained("laurabernardy/LuxGPT2")
```
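
Once loaded, the model can generate Luxembourgish text with `generate`. The prompt and sampling parameters below are illustrative assumptions, not values from the model card.

```python
# Generate a continuation of a Luxembourgish prompt.
inputs = tokenizer("Lëtzebuerg ass", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,  # length of the continuation
    do_sample=True,     # sample instead of greedy decoding
    top_p=0.95,         # nucleus sampling
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```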

## Limitations and Biases

See the GPT-2 model card for considerations on limitations and bias, and the GPT-2 documentation for details on the underlying architecture.