Buy me a coffee if you like this project ;)
Description
GGML format model files for this project.
Inference
from ctransformers import AutoModelForCausalLM

# Load the GGML weights with ctransformers (model_file selects the GGML file inside output_dir)
llm = AutoModelForCausalLM.from_pretrained(output_dir,
                                           model_file=ggml_file,
                                           gpu_layers=32,
                                           model_type="llama")

manual_input: str = "Tell me about your last dream, please."
llm(manual_input,
    max_new_tokens=256,
    temperature=0.9,
    top_p=0.7)
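As a side note, the top_p=0.7 argument above enables nucleus sampling: at each step, only the smallest set of tokens whose cumulative probability reaches 0.7 is kept as candidates. A minimal sketch of that filtering step, in plain Python and independent of ctransformers (the toy token probabilities are made up for illustration):

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p; sampling then happens only among those tokens."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for token, p in ranked:
        kept.append(token)
        total += p
        if total >= top_p:
            break
    return kept

# Toy distribution over next tokens (hypothetical values):
probs = {"dream": 0.5, "sleep": 0.3, "cat": 0.15, "xyz": 0.05}
print(top_p_filter(probs, 0.7))  # ['dream', 'sleep']
```

Lower top_p values make generation more focused; values near 1.0 allow more of the tail of the distribution through.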
Original model card
license: apache-2.0
Model Card for ToolLLaMA
This is the ToolLLaMA model introduced in ToolBench.
Model Details
Model Description
- License: apache-2.0
- Finetuned from model: LLaMA-7b
Uses
Refer to ToolBench.
Training Details
Trained with the new version of the data in ToolBench.