---
license: cc-by-sa-4.0
inference: false
---

# SLIM-SUMMARY

**slim-summary** is a small, specialized model fine-tuned for "summarize" function calls, generating output consisting of a python dictionary with a "summary" key and a value that consists of a list of distinct summary points:

`{'summary': ['point1', 'point2', 'point3']}`

As an experimental feature of the model, an optional list size can be passed with the params when invoking the model, to guide the model toward a specific number of response elements.

The model has 2.7B parameters, small enough to run on a CPU, and is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which in turn is a fine-tune of stabilityai/stablelm-3b-4e1t.

For fast inference, we recommend the quantized 'tool' version of this model, e.g., [**'slim-summary-tool'**](https://huggingface.co/llmware/slim-summary-tool).

## Prompt format:

`function = "summarize"`

`params = "key points (3)"`

`prompt = "<human>: " + {text} + "\n" + "<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
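As a concrete illustration of the template, the sketch below assembles a prompt with an explicit list size of five, exercising the experimental list-size control described above (the sample text is illustrative only):

```python
# illustrative sketch: assembling the prompt template by hand
text = "Tesla stock declined 8% yesterday in premarket trading after a poorly-received event."
function = "summarize"
params = "key points (5)"   # optional list size - guides the model toward five summary points

prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>" + "\n<bot>:"
```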
## Transformers Script

```python
import ast
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-summary")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-summary")

function = "summarize"
params = "key points (3)"

text = "Tesla stock declined 8% yesterday in premarket trading after a poorly-received event in San Francisco, in which the company indicated a likely shortfall in revenue."

prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>" + "\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
start_of_input = len(inputs.input_ids[0])

outputs = model.generate(
    inputs.input_ids.to('cpu'),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100
)

output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

print("output only: ", output_only)

# here's the fun part - the output is a string that should evaluate to a python dictionary
try:
    output_only = ast.literal_eval(output_only)
    print("success - converted to python dictionary automatically")
except (ValueError, SyntaxError):
    print("fail - could not convert to python dictionary automatically - ", output_only)
```
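Since the model's output is a string representation of a python dictionary, the generate-and-parse steps above can be wrapped into a small helper. The sketch below is our own convenience wrapper (the `summarize_text` name and its signature are not part of the model or any library), reusing the `model` and `tokenizer` loaded above:

```python
import ast

def summarize_text(model, tokenizer, text, num_points=3):
    """Run a 'summarize' function call and return the list of summary points,
    or None if the output could not be parsed into a dictionary."""
    params = f"key points ({num_points})"
    prompt = "<human>: " + text + "\n" + f"<summarize> {params} </summarize>" + "\n<bot>:"
    inputs = tokenizer(prompt, return_tensors="pt")
    start_of_input = len(inputs.input_ids[0])
    outputs = model.generate(
        inputs.input_ids.to('cpu'),
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        temperature=0.3,
        max_new_tokens=100
    )
    output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)
    try:
        return ast.literal_eval(output_only).get("summary")
    except (ValueError, SyntaxError):
        return None

# usage, reusing model and tokenizer from the script above:
# points = summarize_text(model, tokenizer, text, num_points=3)
```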
## Using as Function Call in LLMWare

```python
from llmware.models import ModelCatalog

slim_model = ModelCatalog().load_model("llmware/slim-summary")
response = slim_model.function_call(text, params=["key points (3)"], function="summarize")
print("llmware - llm_response: ", response)
```
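For faster CPU inference, the same llmware interface can be pointed at the quantized tool version mentioned above. This is a sketch, assuming 'slim-summary-tool' is registered under that name in the llmware ModelCatalog:

```python
from llmware.models import ModelCatalog

# assumption: the quantized tool version is available in the catalog as "slim-summary-tool"
slim_tool = ModelCatalog().load_model("slim-summary-tool")
response = slim_tool.function_call(text, params=["key points (3)"], function="summarize")
print("llmware - llm_response: ", response)
```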
## Model Card Contact

Darren Oberst & llmware team

[Join us on Discord](https://discord.gg/MhZn5Nc39h)