Update README.md
README.md CHANGED
@@ -49,20 +49,11 @@ The first BLING models have been trained for common RAG scenarios, specifically:
 without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.
 
 
-### Out-of-Scope Use
-
-<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
-1. BLING is not designed for 'chat-bot' or 'consumer-oriented' applications.
-
-2. BLING is not optimal for most production applications, other than simple and highly specific use cases.
-
-
 ## Bias, Risks, and Limitations
 
 <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
-
+Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
 
 
 ## How to Get Started with the Model

@@ -76,7 +67,7 @@ model = AutoModelForCausalLM.from_pretrained("llmware/bling-1.4b-0.1")
 
 The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:
 
-full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\:
+full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\:"
 
 The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:
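The one-character fix in the second hunk matters beyond formatting: without the closing quote, the snippet is a Python syntax error for anyone copying it. For reference, here is a minimal sketch of the corrected wrapper in use; it assumes the standard transformers generate() API, treats the backslashes in the README line as markdown escapes (so the literal prefix is "<human>: "), and uses my_context and my_question as hypothetical placeholders for the two closed-context sub-parts the README goes on to describe.

# Minimal sketch (not part of this commit) of the corrected
# "<human>/<bot>" prompt wrapper, end to end.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-1.4b-0.1")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-1.4b-0.1")

# Hypothetical placeholder passage and question.
my_context = "The annual report stated that revenue grew 12% in 2022."
my_question = "How much did revenue grow in 2022?"

# Closed-context prompt: text passage first, then the question,
# wrapped exactly as the fixed line in this commit shows.
my_prompt = my_context + "\n" + my_question
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

inputs = tokenizer(full_prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100)

# Slice off the prompt tokens so only the model's answer is decoded.
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response)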