SLIM-RATINGS

slim-ratings is part of the SLIM ("Structured Language Instruction Model") model series, consisting of small, specialized decoder-based models fine-tuned for function-calling.

slim-ratings has been fine-tuned for rating/stars (degree-of-sentiment) function calls, generating output in the form of a python dictionary with the specified keys, e.g.:

    {"rating": ["rating score of 1 (low) - 5 (high)"]}

SLIM models are designed to provide flexible natural language generation that can be used as one step in a multi-step, multi-model LLM-based automation workflow.
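
As a minimal sketch of that pattern (assuming the companion llmware/slim-sentiment model and the same function_call interface shown later in this card), two SLIM models can be run over the same text and their structured outputs combined:

from llmware.models import ModelCatalog

text = ("I am extremely impressed with the quality of earnings and growth "
        "that we have seen from the company this quarter.")

# slim-sentiment is a companion SLIM model, assumed here for illustration
ratings_model = ModelCatalog().load_model("llmware/slim-ratings")
sentiment_model = ModelCatalog().load_model("llmware/slim-sentiment")

rating_response = ratings_model.function_call(text, params=["rating"], function="classify")
sentiment_response = sentiment_model.function_call(text, params=["sentiment"], function="classify")

# both outputs are python dictionaries, so a downstream step can branch on them directly
print("rating: ", rating_response)
print("sentiment: ", sentiment_response)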

Each slim model has a 'quantized tool' version, e.g., 'slim-ratings-tool'.
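
A minimal sketch of using the tool variant (assuming the quantized package loads through the same ModelCatalog interface under the 'slim-ratings-tool' name given above):

from llmware.models import ModelCatalog

# the 'tool' version is a quantized package intended for fast local inference
ratings_tool = ModelCatalog().load_model("slim-ratings-tool")
response = ratings_tool.function_call("The product was a big disappointment.",
                                      params=["rating"], function="classify")

print("tool response: ", response)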

Prompt format:

function = "classify"
params = "rating"
prompt = "<human> " + {text} + "\n" +
                      "<{function}> " + {params} + "</{function}>" + "\n<bot>:"

Transformers Script

import ast
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-ratings")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-ratings")

function = "classify"
params = "rating"

text = "I am extremely impressed with the quality of earnings and growth that we have seen from the company this quarter."  

prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])

outputs = model.generate(
    inputs.input_ids.to('cpu'),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100
)

output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

print("output only: ", output_only)  

# here's the fun part - convert the llm string output into a python dictionary
try:
    output_dict = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")
except (ValueError, SyntaxError):
    print("fail - could not convert to python dictionary automatically - ", output_only)

Using as Function Call in LLMWare

from llmware.models import ModelCatalog

slim_model = ModelCatalog().load_model("llmware/slim-ratings")
response = slim_model.function_call(text, params=["rating"], function="classify")

print("llmware - llm_response: ", response)

Model Card Contact

Darren Oberst & llmware team

Join us on Discord
