not-lain committed on
Commit 6e3c170
1 Parent(s): 31f24dc

Update README.md

better readability

Files changed (1)
  1. README.md +30 -24
README.md CHANGED
@@ -77,15 +77,19 @@ Any model can provide inaccurate or incomplete information, and should be used i

The fastest way to get started with BLING is through direct import in transformers:

- from transformers import AutoTokenizer, AutoModelForCausalLM
- tokenizer = AutoTokenizer.from_pretrained("llmware/bling-stable-lm-3b-4e1t-0.1")
- model = AutoModelForCausalLM.from_pretrained("llmware/bling-stable-lm-3b-4e1t-0.1")
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ tokenizer = AutoTokenizer.from_pretrained("llmware/bling-stable-lm-3b-4e1t-0.1")
+ model = AutoModelForCausalLM.from_pretrained("llmware/bling-stable-lm-3b-4e1t-0.1")
+ ```

Please refer to the generation_test .py files in the Files repository, which includes 200 samples and script to test the model. The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval to swap out the test set for RAG workflow consisting of business documents.

The BLING model was fine-tuned with a simple "\<human> and \<bot> wrapper", so to get the best results, wrap inference entries as:

- full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
+ ```python
+ full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
+ ```

The BLING model was fine-tuned with closed-context samples, which assume generally that the prompt consists of two sub-parts:

@@ -94,42 +98,44 @@ The BLING model was fine-tuned with closed-context samples, which assume general

To get the best results, package "my_prompt" as follows:

- my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
-
+ ```python
+ my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
+ ```

If you are using a HuggingFace generation script:

- # prepare prompt packaging used in fine-tuning process
- new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"
-
- inputs = tokenizer(new_prompt, return_tensors="pt")
- start_of_output = len(inputs.input_ids[0])
+ ```python
+ # prepare prompt packaging used in fine-tuning process
+ new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"
+
+ inputs = tokenizer(new_prompt, return_tensors="pt")
+ start_of_output = len(inputs.input_ids[0])

- # temperature: set at 0.3 for consistency of output
- # max_new_tokens: set at 100 - may prematurely stop a few of the summaries
+ # temperature: set at 0.3 for consistency of output
+ # max_new_tokens: set at 100 - may prematurely stop a few of the summaries

- outputs = model.generate(
- inputs.input_ids.to(device),
- eos_token_id=tokenizer.eos_token_id,
- pad_token_id=tokenizer.eos_token_id,
- do_sample=True,
- temperature=0.3,
- max_new_tokens=100,
- )
+ outputs = model.generate(
+ inputs.input_ids.to(device),
+ eos_token_id=tokenizer.eos_token_id,
+ pad_token_id=tokenizer.eos_token_id,
+ do_sample=True,
+ temperature=0.3,
+ max_new_tokens=100,
+ )

- output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True)
+ output_only = tokenizer.decode(outputs[0][start_of_output:],skip_special_tokens=True)
+ ```

## Citations

This model has been fine-tuned on the base StableLM-3B-4E1T model from StabilityAI. For more information about this base model, please see the citation below:
-
+ ```
@misc{StableLM-3B-4E1T,
url={[https://huggingface.co/stabilityai/stablelm-3b-4e1t](https://huggingface.co/stabilityai/stablelm-3b-4e1t)},
title={StableLM 3B 4E1T},
author={Tow, Jonathan and Bellagente, Marco and Mahan, Dakota and Riquelme, Carlos}
}
-
+ ```

## Model Card Contact
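
As a quick illustration of the llmware integration that the README references via **generation_test_llmware_script.py**, a minimal sketch follows. It is not part of this commit: it assumes llmware's `Prompt` interface (`load_model` / `prompt_main`), assumes the model name from this card is registered in the llmware model catalog, and uses a made-up passage and question, so treat the script in the repository as the authoritative version.

```python
# Hedged sketch: running the BLING model through llmware's Prompt wrapper,
# which handles the <human>/<bot> packaging and closed-context prompting shown above.
# Assumptions: llmware is installed (pip install llmware) and this model name
# resolves in the llmware model catalog.
from llmware.prompts import Prompt

# load the model by the same name used in the transformers example above
prompter = Prompt().load_model("llmware/bling-stable-lm-3b-4e1t-0.1")

# illustrative closed-context input: a text passage plus a question about it
text_passage = "The purchase agreement was signed on June 1, 2023 for a total price of $250,000."
question = "What was the total price of the purchase agreement?"

# context + question mirrors the {{text_passage}} + {{question/instruction}} packaging above
response = prompter.prompt_main(question, context=text_passage)
print(response)
```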