Model Card for bling-tiny-llama-v0
bling-tiny-llama-v0 is part of the BLING ("Best Little Instruct No-GPU-required...") model series, RAG-instruct trained on top of a TinyLlama-1.1B base model.
BLING models are fine-tuned with high-quality custom instruct datasets, designed for rapid testing and prototyping in RAG scenarios.
Benchmark Tests
Evaluated against the benchmark test: RAG-Instruct-Benchmark-Tester
Average of 2 test runs, scored with 1 point for a correct answer, 0.5 points for a partially correct answer or a blank / "not found" (NF) response, 0.0 points for an incorrect answer, and -1 point for a hallucination (see the short scoring sketch at the end of this section).
--Accuracy Score: 86.5 correct out of 100
--Not Found Classification: 85.0%
--Boolean: 82.50%
--Math/Logic: 37.50%
--Complex Questions (1-5): 3 (Medium-High: multiple choice, table reading, causal)
--Summarization Quality (1-5): 3 (Coherent, extractive)
--Hallucinations: No hallucinations observed in test runs.
For test run results (and a good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet") in this repo.
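As a point of reference, the scoring rule above amounts to the following short sketch (the grade labels and function are illustrative, not the actual evaluation harness):

# illustrative scoring sketch - grade labels are hypothetical, not llmware's actual harness
POINTS = {"correct": 1.0, "partial_or_nf": 0.5, "incorrect": 0.0, "hallucination": -1.0}

def run_score(grades):
    # grades: one label per question in a 100-question run
    return sum(POINTS[g] for g in grades)

# the reported accuracy score is the average of two such runs:
# (run_score(run_1) + run_score(run_2)) / 2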
Model Description
- Developed by: llmware
- Model type: TinyLlama
- Language(s) (NLP): English
- License: Apache 2.0
- Finetuned from model: TinyLlama-1.1B (2.5T-token checkpoint)
Uses
Direct Use
BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries with complex information sources, such as financial services and legal and regulatory services.
BLING models have been trained for common RAG scenarios, specifically question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage: provide a text passage as context, ask a question, and get a clear, fact-based response.
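For example, a typical closed-context exchange looks like this (illustrative strings, not drawn from the training or test data):

# illustrative closed-context example: pass a passage plus a pointed question
context = "Total quarterly revenue was $12.5 million, an increase of 8% year-over-year."
question = "What was the total quarterly revenue?"
# expected: a concise, fact-based answer such as "$12.5 million"

See "How to Get Started with the Model" below for the exact prompt wrapper.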
Bias, Risks, and Limitations
Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.
How to Get Started with the Model
The fastest way to get started with bling-tiny-llama-v0 is through direct import in transformers:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llmware/bling-tiny-llama-v0", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("llmware/bling-tiny-llama-v0", trust_remote_code=True)
device = "cuda" if torch.cuda.is_available() else "cpu"  # used by the generation snippet below
model.to(device)
Please refer to the generation_test*.py files in the Files section of this repo, which include a 200-sample test set and scripts to test the model. The generation_test_llmware_script.py script includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval, so that the test set can be swapped out for a RAG workflow over your own business documents.
The BLING model was fine-tuned with a simple "<human>" and "<bot>" wrapper, so to get the best results, wrap inference entries as:
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:
- Text Passage Context, and
- Specific question or instruction based on the text passage
To get the best results, package "my_prompt" as follows:
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
If you are using a HuggingFace generation script:
# prepare prompt packaging used in fine-tuning process
# `entries` is one record from the test set: a dict with "context" and "query" keys
new_prompt = "<human>: " + entries["context"] + "\n" + entries["query"] + "\n" + "<bot>:"

inputs = tokenizer(new_prompt, return_tensors="pt")
start_of_output = len(inputs.input_ids[0])

# temperature: set at 0.3 for consistency of output
# max_new_tokens: set at 100 - may prematurely stop a few of the summaries
outputs = model.generate(
    inputs.input_ids.to(device),
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.3,
    max_new_tokens=100,
)

# strip the prompt tokens so that only the newly generated answer remains
output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
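Putting the pieces together, the same flow can be packaged as a small helper (a sketch: the function name and sample passage are illustrative and not part of this repo):

def ask(context, question):
    # wrap with the "<human>" / "<bot>" format used in fine-tuning
    prompt = "<human>: " + context + "\n" + question + "\n" + "<bot>:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        inputs.input_ids.to(device),
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        temperature=0.3,
        max_new_tokens=100,
    )
    # return only the newly generated tokens
    return tokenizer.decode(outputs[0][len(inputs.input_ids[0]):], skip_special_tokens=True)

print(ask("The lease term runs from January 1, 2023 to December 31, 2025, with monthly rent of $4,000.",
          "What is the monthly rent?"))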
Model Card Contact
Darren Oberst & llmware team