Quantization made by Richard Erkhov.

  • Github
  • Discord
  • Request more models

fine-tuned-gpt-neo - bnb 4bits

Original model description:

license: mit
language:
- en
base_model:
- EleutherAI/gpt-neo-1.3B
library_name: transformers

Fine-tuned GPT-Neo Model

This is a fine-tuned version of GPT-Neo, adapted for specific downstream tasks.

Model Details

  • Model Type: GPT-Neo
  • Fine-tuned for: [Specify tasks or datasets]

Usage

To use the model, run the following code:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Torrchy/fine-tuned-gpt-neo")
tokenizer = AutoTokenizer.from_pretrained("Torrchy/fine-tuned-gpt-neo")
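
Since this repository is a bitsandbytes 4-bit quantization, you may want to load it with an explicit quantization config and then generate text. The sketch below is illustrative, not confirmed against this checkpoint: the quantization settings (NF4, fp16 compute) are common defaults and may not match how these weights were actually quantized, and it assumes transformers, accelerate, and bitsandbytes are installed with a CUDA device available.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit settings; not confirmed to match this checkpoint's quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Torrchy/fine-tuned-gpt-neo",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Torrchy/fine-tuned-gpt-neo")

# Simple generation example
inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```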
Model size: 730M params (Safetensors)
Tensor types: F32, FP16, U8