TheBloke committed
Commit 796e493
1 Parent(s): 0eda7ce

Upload README.md

Files changed (1)
  1. README.md +12 -6
README.md CHANGED
@@ -7,7 +7,7 @@ language:
 - en
 library_name: transformers
 license: apache-2.0
-model_creator: Jet Davis
+model_creator: Devin Gulliver
 model_name: OpenInstruct Mistral 7B
 model_type: mistral
 pipeline_tag: text-generation
@@ -45,13 +45,13 @@ quantized_by: TheBloke
 <!-- header end -->
 
 # OpenInstruct Mistral 7B - GGUF
-- Model creator: [Jet Davis](https://huggingface.co/monology)
+- Model creator: [Devin Gulliver](https://huggingface.co/monology)
 - Original model: [OpenInstruct Mistral 7B](https://huggingface.co/monology/openinstruct-mistral-7b)
 
 <!-- description start -->
 ## Description
 
-This repo contains GGUF format model files for [Jet Davis's OpenInstruct Mistral 7B](https://huggingface.co/monology/openinstruct-mistral-7b).
+This repo contains GGUF format model files for [Devin Gulliver's OpenInstruct Mistral 7B](https://huggingface.co/monology/openinstruct-mistral-7b).
 
 These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
 
@@ -80,7 +80,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
 * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/openinstruct-mistral-7B-AWQ)
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/openinstruct-mistral-7B-GPTQ)
 * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/openinstruct-mistral-7B-GGUF)
-* [Jet Davis's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/monology/openinstruct-mistral-7b)
+* [Devin Gulliver's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/monology/openinstruct-mistral-7b)
 <!-- repositories-available end -->
 
 <!-- prompt-template start -->
@@ -302,7 +302,7 @@ And thank you again to a16z for their generous grant.
 <!-- footer end -->
 
 <!-- original-model-card start -->
-# Original model card: Jet Davis's OpenInstruct Mistral 7B
+# Original model card: Devin Gulliver's OpenInstruct Mistral 7B
 
 
 # OpenInstruct Mistral-7B
@@ -313,7 +313,7 @@ Quantized to FP16 and released under the [Apache-2.0](https://choosealicense.com
 Compute generously provided by [Higgsfield AI](https://higgsfield.ai/model/655559e6b5777dab620095e0).
 
 
-Prompt format: Alpaca
+## Prompt format: Alpaca
 ```
 Below is an instruction that describes a task. Write a response that appropriately completes the request.
 
@@ -323,6 +323,12 @@ Below is an instruction that describes a task. Write a response that appropriate
 ### Response:
 ```
 
+## Recommended preset:
+- temperature: 0.2
+- top_k: 50
+- top_p: 0.95
+- repetition_penalty: 1.1
+
 \*as of 21 Nov 2023. "commercially-usable" includes both an open-source base model and a *non-synthetic* open-source finetune dataset. updated leaderboard results available [here](https://huggingfaceh4-open-llm-leaderboard.hf.space).
 
 <!-- original-model-card end -->
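For reference, the Alpaca prompt format and the recommended sampling preset added in this commit can be sketched in plain Python. This is a minimal illustration, not part of the repo: the `format_alpaca` helper is hypothetical, and the `### Instruction:` section is assumed to follow the standard Alpaca layout (it falls between the two diff hunks, so it is not visible above).

```python
# Alpaca-style template matching the prompt format in the model card.
# The "### Instruction:" section is an assumption based on the standard
# Alpaca layout; it is elided between the hunks of this diff.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

# Recommended sampling preset from the model card.
PRESET = {
    "temperature": 0.2,
    "top_k": 50,
    "top_p": 0.95,
    "repetition_penalty": 1.1,
}

def format_alpaca(instruction: str) -> str:
    """Fill the Alpaca template with a user instruction (illustrative helper)."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

prompt = format_alpaca("List three uses of GGUF files.")
```

The keys in `PRESET` map directly onto `transformers` `generate()` keyword arguments; llama.cpp-based clients expose the same settings, though the repetition penalty flag is typically named `repeat_penalty` there.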