---
license: apache-2.0
inference: false
---

# bling-phi-3.5

bling-phi-3.5 is part of the BLING ("Best Little Instruct No-GPU") model series, RAG-instruct trained on top of a Microsoft Phi-3.5 base model.

### Benchmark Tests

Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)
1 Test Run (temperature=0.0, sample=False) with 1 point for correct answer, 0.5 point for partial correct or blank / NF, 0.0 points for incorrect, and -1 points for hallucinations.

--**Accuracy Score**: **100** correct out of 100
--Not Found Classification: 85.0%
--Boolean: 95.0%
--Math/Logic: 90.0%
--Complex Questions (1-5): 4 (Above Average - multiple-choice, causal)
--Summarization Quality (1-5): 4 (Above Average)
--Hallucinations: No hallucinations observed in test runs.

Note: test results were produced not with the pytorch packaging of the model, but with the [gguf quantized version](https://www.huggingface.co/llmware/bling-phi-3.5-gguf). Minor variations are possible with this fp16 pytorch model, which is released as a base for further fine-tuning and porting to other formats (ONNX, OpenVino), while the gguf quantized version is recommended for inference (on Mac and CUDA in particular).

For test run results (and a good indicator of target use cases), please see the files ("core_rag_test" and "answer_sheet") in this repo.

Note: compare results with [bling-phi-3](https://www.huggingface.co/llmware/bling-phi-3).

### Model Description

- **Developed by:** llmware
- **Model type:** bling
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Microsoft Phi-3.5

## Uses

The intended use of BLING models is two-fold:

1. Provide high-quality RAG-Instruct models designed for fact-based, no-"hallucination" question-answering in connection with an enterprise RAG workflow.

2. BLING models are fine-tuned on top of leading base foundation models, generally in the 1-3B+ parameter range, and purposefully rolled out across multiple base models to provide choices and "drop-in" replacements for RAG-specific use cases.

### Direct Use

BLING is designed for enterprise automation use cases, especially in knowledge-intensive industries, such as financial services and legal and regulatory industries with complex information sources.

BLING models have been trained for common RAG scenarios, specifically: question-answering, key-value extraction, and basic summarization as the core instruction types, without the need for a lot of complex instruction verbiage - provide a text passage context, ask questions, and get clear fact-based responses.

## Bias, Risks, and Limitations

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.

## How to Get Started with the Model

The fastest way to get started with BLING is through direct import in transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("llmware/bling-phi-3.5", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("llmware/bling-phi-3.5", trust_remote_code=True)
```

Please refer to the generation_test .py files in the Files repository, which include 200 samples and a script to test the model.
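As a minimal sketch (not part of the original test scripts), the snippet below shows one way to run a single inference with the model and tokenizer loaded above, applying the `<human>`/`<bot>` prompt wrapper described in the next section; the sample passage, question, and generation parameters are illustrative assumptions only.

```python
# Minimal inference sketch (illustrative only) - assumes `model` and `tokenizer`
# were loaded as shown above, and applies the "<human>: ... \n<bot>:" wrapper
# described in the prompting notes below.
import torch

# Hypothetical closed-context example: text passage + question
text_passage = "The company reported total revenue of $12.5 million in the third quarter."
question = "What was the total revenue in the third quarter?"

# Package the context and question, then wrap with the human/bot separators
my_prompt = text_passage + "\n" + question
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

inputs = tokenizer(full_prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=False,                       # deterministic decoding, as in the benchmark runs
        pad_token_id=tokenizer.eos_token_id,
    )

# Decode only the newly generated tokens (strip the prompt)
answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer.strip())
```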
The **generation_test_llmware_script.py** includes built-in llmware capabilities for fact-checking, as well as easy integration with document parsing and actual retrieval to swap out the test set for a RAG workflow consisting of business documents.

The BLING model was fine-tuned with a simple `<human>:` and `<bot>:` wrapper, so to get the best results, wrap inference entries as:

```python
full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"
```

(As an aside, we intended to retire "human-bot" and tried several variations of the new Microsoft Phi-3 prompt template, but ultimately had slightly better results with the very simple "human-bot" separators, so we opted to keep them.)

The BLING model was fine-tuned with closed-context samples, which generally assume that the prompt consists of two sub-parts:

1. Text Passage Context, and

2. Specific question or instruction based on the text passage

To get the best results, package "my_prompt" as follows:

```python
my_prompt = {{text_passage}} + "\n" + {{question/instruction}}
```

## Model Card Contact

Darren Oberst & llmware team