---
license: apache-2.0  
inference: false  
---

# SLIM-EMOTIONS


**slim-emotions** is part of the SLIM ("**S**tructured **L**anguage **I**nstruction **M**odel") model series, a set of small, specialized decoder-based models fine-tuned for function calling.

slim-emotions has been fine-tuned for **emotion analysis** function calls, generating output in the form of a Python dictionary with the specified keys, e.g.:

&nbsp;&nbsp;&nbsp;&nbsp;`{"emotions": ["proud"]}`


SLIM models are designed to generate structured outputs that can be used programmatically as part of a multi-step, multi-model LLM-based automation workflow.  
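
As a purely illustrative sketch (the `route_by_emotion` helper and the label set below are hypothetical, not part of the model or the llmware library), a downstream step might branch on the returned dictionary:

    # hypothetical downstream step: branch on the emotions returned by the model
    NEGATIVE = {"angry", "sad", "afraid", "anxious"}

    def route_by_emotion(llm_output: dict) -> str:
        emotions = llm_output.get("emotions", [])
        if any(e in NEGATIVE for e in emotions):
            return "escalate_for_review"
        return "archive"

    print(route_by_emotion({"emotions": ["proud"]}))   # archive
    print(route_by_emotion({"emotions": ["angry"]}))   # escalate_for_review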

Each slim model has a 'quantized tool' version, e.g.,  [**'slim-emotions-tool'**](https://huggingface.co/llmware/slim-emotions-tool).  
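
As a minimal sketch, assuming the quantized tool is registered in the llmware ModelCatalog under the name "slim-emotions-tool", it can be loaded and called in the same way as the base model:

    from llmware.models import ModelCatalog

    # assumption: the quantized GGUF tool is registered as "slim-emotions-tool"
    tool_model = ModelCatalog().load_model("slim-emotions-tool")
    response = tool_model.function_call("I was so proud of my team today.",
                                        params=["emotions"], function="classify")
    print(response)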


## Prompt format

    function = "classify"
    params = "emotions"
    prompt = "<human>: " + {text} + "\n" + "<{function}> " + {params} + " </{function}>" + "\n<bot>:"
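
For example, with a short input sentence, the fully assembled prompt would be:

    <human>: I was so proud of my team today.
    <classify> emotions </classify>
    <bot>: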


<details>
<summary>Transformers Script </summary>

    import ast
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("llmware/slim-emotions")
    tokenizer = AutoTokenizer.from_pretrained("llmware/slim-emotions")

    function = "classify"
    params = "emotions"

    text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."

    prompt = "<human>: " + text + "\n" + f"<{function}> {params} </{function}>\n<bot>:"

    inputs = tokenizer(prompt, return_tensors="pt")
    start_of_input = len(inputs.input_ids[0])

    outputs = model.generate(
        inputs.input_ids.to('cpu'),
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        temperature=0.3,
        max_new_tokens=100
    )

    output_only = tokenizer.decode(outputs[0][start_of_input:], skip_special_tokens=True)

    print("output only: ", output_only)

    # convert the llm string output into a python dictionary
    try:
        output_dict = ast.literal_eval(output_only)
        print("success - converted to python dictionary automatically")
    except (ValueError, SyntaxError):
        print("fail - could not convert to python dictionary automatically - ", output_only)
   
</details>
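
Note: `ast.literal_eval` is used rather than `json.loads` because the model emits a Python-style dictionary, which is not guaranteed to be strict JSON.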
 
<details>
<summary>Using as Function Call in LLMWare</summary>

    from llmware.models import ModelCatalog

    text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."

    slim_model = ModelCatalog().load_model("llmware/slim-emotions")
    response = slim_model.function_call(text, params=["emotions"], function="classify")

    print("llmware - llm_response: ", response)

</details>  
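
When calling through llmware, the string-to-dictionary conversion is handled inside `function_call`, so `response` can generally be used directly in downstream logic.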

    
## Model Card Contact

Darren Oberst & llmware team  

[Join us on Discord](https://discord.gg/MhZn5Nc39h)