doberst committed on
Commit bb0a108
1 Parent(s): 5ef7c6a

Upload 3 files

Files changed (4)
  1. .gitattributes +1 -0
  2. README.md +59 -0
  3. config.json +15 -0
  4. slim-nli.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+slim-nli.gguf filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,62 @@
 ---
 license: apache-2.0
 ---
+
+# Model Card for slim-sentiment-tool
+
+<!-- Provide a quick summary of what the model is/does. -->
+
+**slim-sentiment-tool** is part of the SLIM ("Structured Language Instruction Model") model series, providing a set of small, specialized, decoder-based LLMs fine-tuned for function-calling.
+
+slim-sentiment-tool is a Q4_K_M quantized GGUF version of slim-sentiment, providing a fast, small inference implementation.
+
+Load in your favorite GGUF inference engine, or try with llmware as follows:
+
+    from llmware.models import ModelCatalog
+
+    sentiment_tool = ModelCatalog().load_model("llmware/slim-sentiment-tool")
+    response = sentiment_tool.function_call(text_sample, params=["sentiment"], function="classify")
+
+SLIM models can also be loaded even more simply as part of LLMfx calls:
+
+    from llmware.agents import LLMfx
+
+    llm_fx = LLMfx()
+    llm_fx.load_tool("sentiment")
+    response = llm_fx.sentiment(text)
+
+
+### Model Description
+
+<!-- Provide a longer summary of what this model is. -->
+
+- **Developed by:** llmware
+- **Model type:** GGUF
+- **Language(s) (NLP):** English
+- **License:** Apache 2.0
+- **Quantized from model:** llmware/slim-sentiment (fine-tuned TinyLlama)
+
+## Uses
+
+<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+The intended use of SLIM models is to re-imagine traditional 'hard-coded' classifiers through the use of function calls.
+
+Example:
+
+    text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
+
+    model generation - {"sentiment": ["negative"]}
+
+    keys = "sentiment"
+
+All of the SLIM models use a novel prompt instruction structured as follows:
+
+    "<human> " + text + " <classify> " + keys + " </classify>" + "\n<bot>: "
+
+
+## Model Card Contact
+
+Darren Oberst & llmware team
+
+
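The model card above documents the prompt template and the structured dictionary output, but only shows inference through the llmware wrappers. As a rough illustration (not part of this commit), the same template can be driven from a generic GGUF runtime; the sketch below assumes llama-cpp-python as the engine, a hypothetical local path for the quantized file, and greedy decoding with `</s>` as the stop token.

```python
# Minimal sketch: build the documented SLIM prompt by hand and run it through
# llama-cpp-python. The model path, n_ctx, and stop token are assumptions.
import ast

from llama_cpp import Llama

text = "The stock market declined yesterday as investors worried increasingly about the slowing economy."
keys = "sentiment"

# Prompt assembled exactly as described in the model card / config.json
prompt = "<human> " + text + " <classify> " + keys + " </classify>" + "\n<bot>:"

llm = Llama(model_path="slim-sentiment-tool.gguf", n_ctx=2048, verbose=False)  # hypothetical local path
out = llm(prompt, max_tokens=100, temperature=0.0, stop=["</s>"])
raw = out["choices"][0]["text"].strip()

# The model is trained to emit a python-style dict, e.g. {'sentiment': ['negative']},
# so literal_eval is a forgiving way to recover it; real code should catch parse errors.
result = ast.literal_eval(raw)
print(result)   # expected: {'sentiment': ['negative']}
```

The `ast.literal_eval` step mirrors the single-quoted `output_format` shown in config.json below; production code would wrap both the generation and the parse in error handling.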
config.json ADDED
@@ -0,0 +1,15 @@
+{
+  "model_name": "slim-sentiment-tool",
+  "quantization": "Q4_K_M GGUF",
+  "model_base": "tiny-llama",
+  "model_type": "llama",
+  "parameters": "1.1 billion",
+  "description": "slim-sentiment is a function-calling model, fine-tuned to output structured json dictionaries, generally with one key 'sentiment' and a value consisting of a list, usually with a single string value - positive, negative or neutral",
+  "prompt_wrapper": "human_bot",
+  "prompt_format": "<human> {context_passage} <classify> sentiment </classify>\n<bot>:",
+  "output_format": "{'sentiment': ['positive']}",
+  "primary_keys": ["sentiment"],
+  "output_values": ["positive", "negative", "neutral"],
+  "publisher": "llmware",
+  "release_date": "february 2024"
+}
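Since config.json enumerates `primary_keys` and `output_values`, one natural use of those fields is to sanity-check a generation before trusting it. The sketch below is an assumption about how a caller might do that, not llmware's own validation logic; the function names and file path are illustrative.

```python
# Sketch (not part of the repo): use config.json to validate a model response.
# The returned dict must use a key from "primary_keys" and values from "output_values".
import ast
import json

def load_config(path="config.json"):
    with open(path, "r") as f:
        return json.load(f)

def validate_response(raw_text, config):
    """Parse the model's python-style dict output and check it against the config."""
    try:
        parsed = ast.literal_eval(raw_text.strip())
    except (ValueError, SyntaxError):
        return None  # not a well-formed dict
    if not isinstance(parsed, dict):
        return None

    allowed_keys = set(config["primary_keys"])      # e.g. {"sentiment"}
    allowed_values = set(config["output_values"])   # e.g. {"positive", "negative", "neutral"}

    for key, values in parsed.items():
        if key not in allowed_keys:
            return None
        if not all(v in allowed_values for v in values):
            return None
    return parsed

config = load_config()
print(validate_response("{'sentiment': ['negative']}", config))  # {'sentiment': ['negative']}
print(validate_response("{'sentiment': ['unsure']}", config))    # None
```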
slim-nli.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:196507f0e66d43021962b29df3d0ec2d5b377f5553af3d176a4654124eb5153f
+size 668787680
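The committed file is a Git LFS pointer, so only the sha256 oid and byte size live in the repository; the ~668 MB payload is fetched separately. A small sketch for verifying a downloaded copy against those two values (the local path is an assumption):

```python
# Verify a downloaded slim-nli.gguf against the Git LFS pointer committed above.
# The expected hash and size are taken directly from the pointer file.
import hashlib
import os

EXPECTED_SHA256 = "196507f0e66d43021962b29df3d0ec2d5b377f5553af3d176a4654124eb5153f"
EXPECTED_SIZE = 668787680

def verify_gguf(path="slim-nli.gguf"):
    if os.path.getsize(path) != EXPECTED_SIZE:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MB chunks
            h.update(chunk)
    return h.hexdigest() == EXPECTED_SHA256

print("verified" if verify_gguf() else "mismatch")
```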