---
base_model: llmware/bling-sheared-llama-1.3b-0.1
inference: false
license: apache-2.0
model_creator: llmware
model_name: bling-sheared-llama-1.3b-0.1
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# llmware/bling-sheared-llama-1.3b-0.1-GGUF
Quantized GGUF model files for [bling-sheared-llama-1.3b-0.1](https://huggingface.co/llmware/bling-sheared-llama-1.3b-0.1) from [llmware](https://huggingface.co/llmware).
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [bling-sheared-llama-1.3b-0.1.q2_k.gguf](https://huggingface.co/afrideva/bling-sheared-llama-1.3b-0.1-GGUF/resolve/main/bling-sheared-llama-1.3b-0.1.q2_k.gguf) | q2_k | 630.54 MB |
| [bling-sheared-llama-1.3b-0.1.q3_k_m.gguf](https://huggingface.co/afrideva/bling-sheared-llama-1.3b-0.1-GGUF/resolve/main/bling-sheared-llama-1.3b-0.1.q3_k_m.gguf) | q3_k_m | 703.75 MB |
| [bling-sheared-llama-1.3b-0.1.q4_k_m.gguf](https://huggingface.co/afrideva/bling-sheared-llama-1.3b-0.1-GGUF/resolve/main/bling-sheared-llama-1.3b-0.1.q4_k_m.gguf) | q4_k_m | 872.30 MB |
| [bling-sheared-llama-1.3b-0.1.q5_k_m.gguf](https://huggingface.co/afrideva/bling-sheared-llama-1.3b-0.1-GGUF/resolve/main/bling-sheared-llama-1.3b-0.1.q5_k_m.gguf) | q5_k_m | 1.00 GB |
| [bling-sheared-llama-1.3b-0.1.q6_k.gguf](https://huggingface.co/afrideva/bling-sheared-llama-1.3b-0.1-GGUF/resolve/main/bling-sheared-llama-1.3b-0.1.q6_k.gguf) | q6_k | 1.17 GB |
| [bling-sheared-llama-1.3b-0.1.q8_0.gguf](https://huggingface.co/afrideva/bling-sheared-llama-1.3b-0.1-GGUF/resolve/main/bling-sheared-llama-1.3b-0.1.q8_0.gguf) | q8_0 | 1.43 GB |
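To run one of these files locally, any GGUF-compatible runtime such as llama.cpp will work. Below is a minimal sketch using llama-cpp-python; the chosen quant file, the sample passage/question, and the generation parameters are illustrative assumptions, not part of this repo's documentation.

```python
# Minimal local inference sketch with llama-cpp-python
# (pip install llama-cpp-python). Assumes the q4_k_m file from this
# repo has already been downloaded into the working directory.
from llama_cpp import Llama

llm = Llama(model_path="bling-sheared-llama-1.3b-0.1.q4_k_m.gguf", n_ctx=2048)

# BLING expects a closed-context prompt: a text passage plus a question,
# wrapped in the <human>/<bot> format described in the original model card below.
text_passage = "The invoice total is $4,500 and payment is due on March 1, 2024."
question = "What is the invoice total?"
prompt = "<human>: " + text_passage + "\n" + question + "\n<bot>:"

output = llm(prompt, max_tokens=128, stop=["<human>:"])
print(output["choices"][0]["text"].strip())
```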
## Original Model Card:
# Model Card for Model ID
bling-sheared-llama-1.3b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct-trained on top of a Sheared-LLaMA-1.3B base model.
BLING models are fine-tuned with distilled high-quality custom instruct datasets, targeted at a specific subset of instruct tasks with
the objective of providing a high-quality Instruct model that is 'inference-ready' on a CPU laptop even
without using any advanced quantization optimizations.
### Benchmark Tests
Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)
Average of 2 test runs, scored with 1 point for a correct answer, 0.5 points for a partially correct or blank/"not found" answer, 0.0 points for an incorrect answer, and -1 point for a hallucination.

- **Accuracy Score**: **84.50** correct out of 100
- Not Found Classification: 20.0%
- Boolean: 66.25%
- Math/Logic: 9.4%
- Complex Questions (1-5): 1 (Low)
- Summarization Quality (1-5): 3 (Coherent, extractive)
- Hallucinations: No hallucinations observed in test runs.
For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.
### Model Description
- **Developed by:** llmware
- **Model type:** Instruct-trained decoder
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** princeton-nlp/Sheared-LLaMA-1.3B
## Uses
The intended use of BLING models is two-fold:
1. Provide high-quality Instruct models that can run on a laptop for local testing. We have found this extremely useful when building a
proof-of-concept, or working with sensitive enterprise data that must be closely guarded, especially in RAG use cases.
2. Push the state of the art for smaller instruction-following models in the sub-7B parameter range, especially 1B-3B, as single-purpose
automation tools for specific tasks through targeted fine-tuning datasets and focused "instruction" tasks.
### Direct Use
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services,
legal and regulatory industries with complex information sources. Rather than trying to be "all things to all people," BLING models focus on a narrower set of instructions more suitable to a ~1B parameter GPT model.
BLING is ideal for rapid prototyping and testing, and for running an end-to-end workflow locally on a laptop without
having to send sensitive information over an Internet-based API.
The first BLING models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types,
without the need for a lot of complex instruction verbiage: provide a text passage as context, ask a question, and get a clear, fact-based response.
## Bias, Risks, and Limitations
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
## How to Get Started with the Model
The fastest way to get started with BLING is through direct import in transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")
```
The BLING model was fine-tuned with a simple `<human>` and `<bot>` wrapper, so to get the best results, wrap inference entries as:

```python
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```
The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:
1. Text Passage Context, and
2. Specific question or instruction based on the text passage
To get the best results, package `my_prompt` as follows:

```python
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
```
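Putting this together, here is a minimal end-to-end sketch; the sample passage, question, and generation parameters are illustrative assumptions:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")
model = AutoModelForCausalLM.from_pretrained("llmware/bling-sheared-llama-1.3b-0.1")

# Closed-context prompt: a text passage followed by a question about it,
# wrapped in the <human>/<bot> format the model was fine-tuned on.
text_passage = "The services agreement was signed on June 1, 2023 and runs for 24 months."
question = "What is the term of the agreement?"
full_prompt = "<human>: " + text_passage + "\n" + question + "\n" + "<bot>:"

inputs = tokenizer(full_prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100)

# Decode only the tokens generated after the prompt.
response = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(response.strip())
```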
## Citation
This BLING model was built on top of a "Sheared Llama" model base - for more information about the "Sheared Llama" model, please see the paper referenced below:
```bibtex
@article{xia2023sheared,
  title={Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning},
  author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
  year={2023}
}
```
## Model Card Contact
Darren Oberst & llmware team
Please reach out anytime if you are interested in this project! |