

slim-tags-3b is a small, specialized function-calling model fine-tuned to extract and generate meaningful tags from a chunk of text.

Tags generally correspond to named entities, but can also include key objects and phrases that contribute meaningfully to the semantics of the text.

The model is invoked as a specialized 'tags' classifier function that outputs a python dictionary in the form of:

    {'tags': ['NASDAQ', 'S&P', 'Dow', 'Verizon', 'Netflix', ...]}

with the value items in the list generally being extracted from the source text.

The intended use of the model is to auto-generate tags from text that can enhance search retrieval and categorization, or to extract named entities for programmatic use in follow-up queries or prompts. It can also be used for fact-checking, as a secondary validation on a longer (separate) LLM output.
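As an illustration of this downstream use, the dictionary output can be consumed directly in Python, e.g., as a simple keyword filter over candidate documents (a minimal sketch; the sample tags and documents below are hypothetical, not produced by the model):

```python
# Hypothetical tags output from slim-tags-3b, already parsed into a dict
llm_response = {'tags': ['Citibank', 'France', 'UK', 'inflation', 'Airbus']}

# Candidate documents to filter (illustrative examples)
documents = [
    "Airbus reported strong order growth despite inflation pressures.",
    "A new bakery opened downtown last weekend.",
]

def matches_tags(doc, tags):
    """Return the subset of tags that appear in the document text."""
    doc_lower = doc.lower()
    return [t for t in tags if t.lower() in doc_lower]

for doc in documents:
    hits = matches_tags(doc, llm_response['tags'])
    if hits:
        print("match:", hits, "->", doc)
```

The same tags list could equally be passed to a vector store's metadata filter or used to seed a follow-up prompt.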

This model is fine-tuned on top of llmware/bling-stable-lm-3b-4e1t-v0, which in turn is a fine-tune of stabilityai/stablelm-3b-4e1t.

Each slim model has a 'quantized tool' version, e.g., 'slim-tags-3b-tool'.

Prompt format:

```
function = "classify"
params = "tags"
prompt = "<human>: " + {text} + "\n<{function}> {params} </{function}>\n<bot>:"
```

Transformers Script

```python
import ast

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-tags-3b")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-tags-3b")

function = "classify"
params = "tags"

text = "Citibank announced a reduction in its targets for economic growth in France and the UK last week in light of ongoing concerns about inflation and unemployment, especially in large employers such as Airbus."

prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])

# generation settings below are illustrative defaults
outputs = model.generate(
    inputs.input_ids,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100,
)

output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

print("output only: ", output_only)

# here's the fun part - convert the llm string output into a python dictionary
try:
    output_only = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")
except (SyntaxError, ValueError):
    print("fail - could not convert to python dictionary automatically - ", output_only)
```
Using as Function Call in LLMWare

```python
from llmware.models import ModelCatalog

slim_model = ModelCatalog().load_model("llmware/slim-tags-3b")
response = slim_model.function_call(text, params=["tags"], function="classify")

print("llmware - llm_response: ", response)
```
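Once the call returns, the tags can be consumed programmatically. The sketch below assumes the response wraps the `{'tags': [...]}` dictionary shown earlier under an `llm_response` key (the sample response is hypothetical, not produced by the model; consult the llmware documentation for the exact envelope):

```python
# Hypothetical response object, following the {'tags': [...]} format shown above
response = {"llm_response": {"tags": ["Citibank", "France", "UK", "inflation", "Airbus"]}}

tags = response["llm_response"]["tags"]

# e.g., use the extracted entities to build a follow-up query programmatically
follow_up = "What is the latest news on " + " and ".join(tags[:2]) + "?"
print(follow_up)
```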

Model Card Contact

Darren Oberst & llmware team

Join us on Discord
