---
license: apache-2.0
datasets:
  - timdettmers/openassistant-guanaco
language:
  - en
pipeline_tag: text-generation
---

# Anacondia

Anacondia-70m is a Pythia-70m-deduped model fine-tuned with QLoRA on the timdettmers/openassistant-guanaco dataset.

## Usage

Anacondia was trained for educational purposes and is not intended for any downstream use as-is. Please fine-tune it for your downstream task, or consider a more capable model for inference if that does not match your use case.

## Training procedure

The following bitsandbytes quantization config was used during training:

- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
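
For reference, here is a minimal sketch of the same configuration expressed as a transformers `BitsAndBytesConfig`; the variable name is illustrative:

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute,
# mirroring the values listed above (the int8 fields are library defaults)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```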

### Framework versions

- PEFT 0.4.0
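
QLoRA pairs the 4-bit base-model quantization above with LoRA adapters trained via PEFT. The sketch below shows what such a setup could look like; the LoRA hyperparameters (r, alpha, dropout) are illustrative assumptions, not the values actually used for this model:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit config as listed under "Training procedure"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Hypothetical adapter settings: r, alpha, and dropout are assumptions
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # fused attention projection in Pythia (GPT-NeoX)
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    quantization_config=bnb_config,
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```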

## Inference


```python
# Import the necessary classes
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "UncleanCode/anacondia-70m"

# Load the tokenizer and model from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt and generate a continuation
inputs = tokenizer("This is a sentence ", return_tensors="pt")
outputs = model.generate(**inputs)

# Decode the generated token IDs back into text
print(tokenizer.decode(outputs[0]))
```
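
Note that `generate` defaults to greedy decoding with a short maximum length; for longer or more varied completions, pass options such as `max_new_tokens=50` or `do_sample=True` to `model.generate`.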