## Run the Model
```python
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("alexpaul/QI-large-v1")
base_model = LlamaForCausalLM.from_pretrained(
    "alexpaul/QI-large-v1",
    load_in_8bit=True,   # 8-bit loading requires the bitsandbytes package and a CUDA GPU
    device_map="auto",   # spread the weights across available devices automatically
)
```
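Once the tokenizer and model are loaded, text can be generated with the standard `transformers` generation API. A minimal sketch follows; the prompt is illustrative only, and `max_new_tokens` is an arbitrary choice, not a value from this model card:

```python
# Encode a prompt and move the tensors to the same device as the model.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(base_model.device)

# Generate a short continuation; tune max_new_tokens for longer outputs.
outputs = base_model.generate(**inputs, max_new_tokens=64)

# Decode the generated token IDs back into text.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```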