doberst committed
Commit 12c3649
1 Parent(s): e73b3c7

Update README.md

Files changed (1): README.md +6 -2
README.md CHANGED
```diff
@@ -11,6 +11,8 @@ inference: false
 
 As an experimental feature in the model, there is an optional list size that can be passed with the parameters in invoking the model to guide the model to a specific number of response elements.
 
+Input is a text passage, and output is a list of the form:
+
     `['summary_point1', 'summary_point2', 'summary_point3']`
 
 This model is 2.7B parameters, small enough to run on a CPU, and is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which in turn, is a fine-tune of stabilityai/stablelm-3b-4elt.
@@ -19,10 +21,12 @@ For fast inference use of this model, we would recommend using the 'quantized to
 
 ## Usage Tips
 
--- Automatic (ast.literal_eval) conversion of the llm output to a python list is often complicated by the presence of '"' (ascii 34 double quotes) and "'" (ascii 39 single quote). We have provided a straightforward string remediation handler in [llmware](www.github.com/llmware-ai/llmware.git)] that automatically remediates and provides a well-formed Python dictionary. We have tried multiple ways to handle 34/39 in training - and each has a set of trade-offs - we will continue to look for ways to better automate in future releases of the model.
+-- Automatic (ast.literal_eval) conversion of the llm output to a python list is often complicated by the presence of '"' (ascii 34 double quotes) and "'" (ascii 39 single quote). We have provided a straightforward string remediation handler in [llmware](https://www.github.com/llmware-ai/llmware.git) that automatically remediates and provides a well-formed Python dictionary. We have tried multiple ways to handle 34/39 in training - and each has a set of trade-offs - we will continue to look for ways to better automate in future releases of the model.
 
 -- If you are looking for a single output point, try the params "brief description (1)"
 
+-- If the document has a lot of financial points, try the params "financial data points (5)"
+
 -- Param counts are an experimental feature, but work reasonably well to guide the scope of the model's output length. At times, the model's attempt to match the target number of output points will result in some repetitive points.
 
@@ -68,7 +72,7 @@ For fast inference use of this model, we would recommend using the 'quantized to
     output_only = ast.literal_eval(llm_string_output)
     print("success - converted to python dictionary automatically")
 except:
-    # note: rules-based conversion may be required - see [llmware-models.py](www.github.com/llmware-ai/llmware/blobs/main/llmware/models.py) ModelCatalog.remediate_function_call_string()
+    # note: rules-based conversion may be required - see [llmware-models.py](https://www.github.com/llmware-ai/llmware/blobs/main/llmware/models.py) ModelCatalog.remediate_function_call_string()
     # for good example of post-processing conversion script
 
     print("fail - could not convert to python dictionary automatically - ", llm_string_output)
```
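The quote issue the Usage Tips describe can be reproduced in a few lines: an apostrophe (ascii 39) inside a single-quoted list element makes `ast.literal_eval` raise, which is the failure path the README's try/except handles. The sketch below adds a deliberately naive fallback splitter to show the shape of a rules-based remediation; it is an illustrative assumption only, not the llmware handler (`ModelCatalog.remediate_function_call_string()` referenced above), and the sample `llm_string_output` is invented for the demo.

```python
import ast

# Apostrophe inside a single-quoted element - this raw llm output string
# cannot be parsed by ast.literal_eval (raises SyntaxError).
llm_string_output = "['the company's revenue grew 10%', 'margins improved']"

def naive_remediate(s: str) -> list:
    # Naive fallback sketch (NOT the llmware handler): strip the outer
    # brackets, then split on the element delimiter instead of parsing
    # the quotes, so interior apostrophes survive.
    inner = s.strip().lstrip("[").rstrip("]")
    return [part.strip().strip("'\"") for part in inner.split("', '")]

try:
    output_only = ast.literal_eval(llm_string_output)
    print("success - converted to python list automatically")
except (ValueError, SyntaxError):
    # rules-based conversion required - fall back to the naive splitter
    output_only = naive_remediate(llm_string_output)
    print("fallback - applied naive string remediation")

print(output_only)
```

This fallback is fragile by design (it assumes the delimiter `', '` never appears inside an element), which is exactly the kind of trade-off the README notes when discussing ascii 34/39 handling in training.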