doberst committed
Commit c1e21e4
1 Parent(s): 481cf82

Update README.md

Files changed (1): README.md (+8 -0)
README.md CHANGED
@@ -17,6 +17,14 @@ This model is 2.7B parameters, small enough to run on a CPU, and is fine-tuned o
 
  For fast inference use of this model, we would recommend using the 'quantized tool' version, e.g., [**'slim-summary-tool'**](https://huggingface.co/llmware/slim-summary-tool).
 
+ ## Usage Tips
+
+ -- Automatic conversion of the LLM output to a Python list with ast.literal_eval is often complicated by the presence of '"' (ASCII 34, double quote) and "'" (ASCII 39, single quote). We have provided a straightforward string remediation handler in [llmware](https://www.github.com/llmware-ai/llmware.git) that automatically cleans the output and returns a well-formed Python dictionary. We have tried multiple ways to handle 34/39 in training - each has a set of trade-offs - and we will continue to look for ways to better automate this in future releases of the model.
+
+ -- If you are looking for a single output point, try the params "brief description (1)".
+
+ -- Param counts are an experimental feature, but they work reasonably well to guide the scope of the model's output length. At times, the model's attempt to match the target number of output points will result in some repetitive points.
+
 
  ## Prompt format:
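
The quote-handling issue described in the added Usage Tips can be sketched as follows. Note that `remediate_list_output` is a hypothetical helper written for illustration only, not llmware's actual remediation handler; it assumes the model emits a bracketed list of quoted strings.

```python
import ast


def remediate_list_output(text: str) -> list:
    """Best-effort conversion of LLM list output to a Python list.

    Illustrative sketch: stray ASCII 34 (") and ASCII 39 (') characters
    inside items often break ast.literal_eval, so we fall back to a
    manual split that trims quote characters from each item.
    """
    text = text.strip()
    try:
        # Well-formed output parses directly.
        return ast.literal_eval(text)
    except (ValueError, SyntaxError):
        pass
    # Fallback: strip enclosing brackets, split on commas, and trim
    # whitespace plus stray quote characters from each item.
    inner = text.strip("[]")
    items = [item.strip().strip('"').strip("'") for item in inner.split(",")]
    return [item for item in items if item]


# A well-formed list parses via ast.literal_eval:
print(remediate_list_output("['alpha', 'beta']"))

# An apostrophe inside a single-quoted item breaks literal_eval,
# but the fallback still recovers the items:
print(remediate_list_output("['the model's summary', 'beta']"))
```

This fallback is deliberately simple (it would mishandle items containing commas); it only demonstrates why a remediation step is needed before trusting `ast.literal_eval` on raw model output.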