Update README.md
README.md
CHANGED
@@ -7,11 +7,11 @@ inference: false
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-**slim-summary** is a small, specialized model finetuned for summarize function-calls, generating output consisting of a python
+**slim-summary** is a small, specialized model fine-tuned for summarization function calls, generating output consisting of a Python list of distinct summary points.
 
 As an experimental feature in the model, there is an optional list size that can be passed with the parameters in invoking the model to guide the model to a specific number of response elements.
 
-`
+`['summary_point1', 'summary_point2', 'summary_point3']`
 
 This model is 2.7B parameters, small enough to run on a CPU, and is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which in turn is a fine-tune of stabilityai/stablelm-3b-4e1t.
 
@@ -68,8 +68,11 @@ For fast inference use of this model, we would recommend using the 'quantized to
     output_only = ast.literal_eval(llm_string_output)
     print("success - converted to python dictionary automatically")
 except:
+    # note: rules-based conversion may be required - see [llmware-models.py](www.github.com/llmware-ai/llmware/blobs/main/llmware/models.py) ModelCatalog.remediate_function_call_string()
+    # for good example of post-processing conversion script
+
     print("fail - could not convert to python dictionary automatically - ", llm_string_output)
-
+
 </details>
 
 <details>
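The conversion pattern in the second hunk can be sketched end to end as follows. This is a minimal illustration, not the llmware implementation: `llm_string_output` stands in for the model's raw string response, and the regex fallback is a hypothetical stand-in for the rules-based remediation the added comments point to (`ModelCatalog.remediate_function_call_string()`).

```python
import ast
import re


def to_python_list(llm_string_output):
    """Convert the model's raw string response into a Python list.

    Tries ast.literal_eval first (the path shown in the README), then
    falls back to a simple rules-based extraction of quoted items.
    """
    try:
        output_only = ast.literal_eval(llm_string_output)
        print("success - converted to python dictionary automatically")
        return output_only
    except (ValueError, SyntaxError):
        print("fail - could not convert to python dictionary automatically - ", llm_string_output)
        # illustrative fallback: recover single-quoted items from a
        # truncated or slightly malformed list string
        return re.findall(r"'([^']*)'", llm_string_output)


# a well-formed output string parses directly
print(to_python_list("['summary_point1', 'summary_point2']"))
# an unterminated list string still yields the recoverable items
print(to_python_list("['summary_point1', 'summary_point2'"))
```

`ast.literal_eval` is used rather than `eval` because it only accepts Python literals, so arbitrary code in the model's output cannot execute.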