gpt2_guanaco-dolly-platypus is an instruction fine-tuned model based on the GPT-2 transformer architecture.
We use the state-of-the-art Language Model Evaluation Harness to run the benchmark tests above, using the same version as the Hugging Face Open LLM Leaderboard. See below for a sketch of how to reproduce the benchmark results.
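As a hedged sketch, a recent release of the harness (`pip install lm-eval`) exposes a Python entry point. The API and task names below assume harness v0.4+ and may differ from the exact version pinned by the leaderboard:

```python
# Sketch: evaluate this model with EleutherAI's lm-evaluation-harness.
# Task names and few-shot settings here are illustrative, not the
# leaderboard's exact configuration.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=lgaalves/gpt2_guanaco-dolly-platypus",
    tasks=["arc_challenge", "hellaswag", "truthfulqa_mc2"],
    num_fewshot=0,
)
print(results["results"])
```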
- Trained by: Luiz G A Alves
- Model type: gpt2_guanaco-dolly-platypus is an auto-regressive language model based on the GPT-2 transformer architecture.
- Language(s): English
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="lgaalves/gpt2_guanaco-dolly-platypus")
question = "What is a large language model?"
answer = pipe(question)
print(answer[0]["generated_text"])  # the pipeline returns a list of dicts
```
Or, you can load the model directly:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lgaalves/gpt2_guanaco-dolly-platypus")
model = AutoModelForCausalLM.from_pretrained("lgaalves/gpt2_guanaco-dolly-platypus")
```
lgaalves/gpt2_guanaco-dolly-platypus was trained using 3 datasets: Guanaco, Dolly, and Open-Platypus.
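For illustration, the commonly used public versions of these datasets can be loaded with the `datasets` library. The repo IDs below are assumptions inferred from the model name; the exact versions and splits used for training are not documented here:

```python
# Illustrative: load the three public instruction datasets suggested by the
# model name. These repo IDs are assumptions, not confirmed training sources.
from datasets import load_dataset

guanaco = load_dataset("timdettmers/openassistant-guanaco", split="train")
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
platypus = load_dataset("garage-bAInd/Open-Platypus", split="train")
print(len(guanaco), len(dolly), len(platypus))
```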
lgaalves/gpt2_guanaco-dolly-platypus was instruction fine-tuned using LoRA on a single T4 GPU in Google Colab; training took about 1 hour.
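A minimal sketch of such a LoRA setup with the PEFT library is shown below. The hyperparameters are illustrative assumptions, not the values used for this model:

```python
# Sketch: attach LoRA adapters to GPT-2 with PEFT. Only the small adapter
# matrices are trained, which is what makes single-T4 fine-tuning practical.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
lora_config = LoraConfig(
    r=16,                      # rank of the low-rank update matrices (assumed)
    lora_alpha=32,             # scaling factor for the LoRA updates (assumed)
    target_modules=["c_attn"], # GPT-2's fused QKV attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```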
You can use the raw model for text generation or fine-tune it on a downstream task. The model was not extensively tested and may produce false information. Its training data contains a large amount of unfiltered content from the internet, which is far from neutral.
Detailed results can be found on the Open LLM Leaderboard.