The first BLING models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for complex instruction verbiage: provide a text passage as context, ask questions, and get clear, fact-based responses.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.

## How to Get Started with the Model

model = AutoModelForCausalLM.from_pretrained("llmware/bling-falcon-1b-0.1")

The BLING model was fine-tuned with a simple "\<human>" and "\<bot>" wrapper, so to get the best results, wrap inference entries as:

full_prompt = "\<human>\: " + my_prompt + "\n" + "\<bot>\:"
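
The wrapper line above can be sketched as a small helper; the function name here is illustrative, not part of the llmware library:

```python
def wrap_prompt(my_prompt: str) -> str:
    """Wrap an inference entry in the <human>/<bot> format BLING was fine-tuned with."""
    return "<human>: " + my_prompt + "\n" + "<bot>:"

# Example: wrap a simple question before passing it to the model.
full_prompt = wrap_prompt("What is the total amount of the invoice?")
```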

The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:
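
As a minimal sketch, and assuming (per the description above) that the two sub-parts are a text passage used as context and a question about that passage, a closed-context prompt might be assembled as follows; the helper name is hypothetical:

```python
def make_closed_context_prompt(context: str, question: str) -> str:
    # Assumed layout: passage first, question second, inside the
    # <human>/<bot> wrapper that BLING was fine-tuned with.
    my_prompt = context + "\n" + question
    return "<human>: " + my_prompt + "\n" + "<bot>:"

prompt = make_closed_context_prompt(
    "The invoice total is $1,250, due on March 1.",
    "What is the invoice total?",
)
```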