---
tags:
  - language-model
  - transformer-decoder
  - tiny-shakespeare
license: mit
datasets:
  - tiny_shakespeare
---

This is a small autoregressive language model based on the Transformer architecture, trained on the Tiny Shakespeare dataset.

## Model Description

The model is a custom TransformerDecoderModel implementation with a decoder-only architecture similar to GPT-2. It was trained on the Tiny Shakespeare dataset to generate text in the style of William Shakespeare.
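
The card doesn't publish the model's source or configuration, but a decoder-only model of this kind can be sketched in a few lines of PyTorch. Everything below (the hyperparameter values, layer counts, and the use of `nn.TransformerEncoder` with a causal mask) is an illustrative assumption, not the actual implementation:

```python
# Hypothetical sketch of a GPT-style decoder-only Transformer.
# All hyperparameters are illustrative assumptions, not this
# checkpoint's actual configuration.
import torch
import torch.nn as nn

class TransformerDecoderModel(nn.Module):
    def __init__(self, vocab_size, d_model=256, n_heads=4, n_layers=4, max_len=512):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        # With a causal mask, a stack of self-attention-only layers
        # behaves as a GPT-style decoder (no cross-attention needed).
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, idx):
        # idx: (batch, seq_len) token ids
        positions = torch.arange(idx.size(1), device=idx.device)
        x = self.token_emb(idx) + self.pos_emb(positions)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(
            idx.size(1)
        ).to(idx.device)
        x = self.blocks(x, mask=causal_mask)
        return self.lm_head(x)  # (batch, seq_len, vocab_size) logits
```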

## Training Details

The model's training runs were tracked with Weights & Biases.
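
The card doesn't record the exact training setup. As a hedged sketch of how such a run might be tracked, the loop below logs the loss to Weights & Biases; the project name, hyperparameters, and stand-in random data are all assumptions, and it reuses the hypothetical `TransformerDecoderModel` sketch above:

```python
# Hypothetical training-loop skeleton showing W&B tracking; the data,
# hyperparameters, and project name are illustrative assumptions.
import torch
import wandb

wandb.init(project="tiny-shakespeare-decoder", config={"lr": 3e-4, "steps": 100})

vocab_size = 65  # char-level vocab size is an assumption
model = TransformerDecoderModel(vocab_size)  # sketch class from above
optimizer = torch.optim.AdamW(model.parameters(), lr=wandb.config.lr)
loss_fn = torch.nn.CrossEntropyLoss()

for step in range(wandb.config.steps):
    # Stand-in batch of random token ids; real training would stream
    # contiguous chunks of the Tiny Shakespeare text instead.
    batch = torch.randint(0, vocab_size, (8, 128))
    inputs, targets = batch[:, :-1], batch[:, 1:]  # next-token prediction

    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    wandb.log({"train/loss": loss.item()})  # metric tracked in W&B

wandb.finish()
```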

## How to Use

To generate text with this model, you can load it and the tokenizer as follows:

```python
from transformers import AutoTokenizer, GPT2LMHeadModel

# Load the model and tokenizer from the Hub
model = GPT2LMHeadModel.from_pretrained('NataliaH/TransformerDecoderModel')
tokenizer = AutoTokenizer.from_pretrained('NataliaH/TransformerDecoderModel')

# Provide input text and generate a continuation
input_text = 'To be or not to be'
inputs = tokenizer(input_text, return_tensors='pt')
outputs = model.generate(**inputs, max_new_tokens=50)  # cap continuation length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
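
By default `generate()` decodes greedily, which tends to repeat itself. For more varied Shakespeare-style output you can enable sampling; the settings below are illustrative, not tuned for this checkpoint:

```python
# Sampling settings are illustrative assumptions, not tuned values
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.8,   # soften the next-token distribution slightly
    top_k=50,          # restrict sampling to the 50 most likely tokens
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```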
    