doberst committed (verified) · Commit a8e5130 · 1 Parent(s): 5e3037e

Update README.md

Files changed (1): README.md (+16, -0)
**dragon-qwen-7b-gguf** is a quantized version of a fact-based question-answering model, optimized for complex business documents, fine-tuned on top of the Qwen2 7B base model, and packaged with 4_K_M GGUF quantization for a fast, small inference implementation on CPUs.

### Benchmark Tests

Evaluated against the benchmark test: [RAG-Instruct-Benchmark-Tester](https://www.huggingface.co/datasets/llmware/rag_instruct_benchmark_tester)
1 Test Run with sample=False & temperature=0.0 (deterministic output) - 1 point for a correct answer, 0.5 points for a partially correct or blank/"not found" answer, 0.0 points for an incorrect answer, and -1 point for a hallucination.

--**Accuracy Score**: **99.0** correct out of 100
--Not Found Classification: 85.0%
--Boolean: 100.0%
--Math/Logic: 92.5%
--Complex Questions (1-5): 5 (Best in Class)
--Summarization Quality (1-5): 3 (Average)
--Hallucinations: No hallucinations observed in test runs.

For test run results (and a good indicator of target use cases), please see the files "core_rag_test" and "answer_sheet" in this repo.
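The scoring rubric above can be sketched as a small helper, assuming one record per question with three boolean outcome flags (a hypothetical illustration, not the official benchmark harness):

```python
def score_answer(correct: bool, partial_or_not_found: bool, hallucination: bool) -> float:
    """Score one answer per the rubric: 1.0 correct, 0.5 partial/blank/'not found',
    0.0 incorrect, -1.0 for a hallucination."""
    if hallucination:
        return -1.0
    if correct:
        return 1.0
    if partial_or_not_found:
        return 0.5
    return 0.0


def total_score(results) -> float:
    """Sum rubric points over a test run (e.g. 100 questions)."""
    return sum(score_answer(**r) for r in results)
```

For example, a run of 100 fully correct answers would score 100.0, while each hallucination subtracts a full point.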
27
+
28
+
29
  To pull the model via API:
30
 
31
  from huggingface_hub import snapshot_download
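A minimal sketch of pulling the files with `snapshot_download` (the `repo_id` and `local_dir` values below are assumptions based on the model name; adjust them for your setup):

```python
from huggingface_hub import snapshot_download

if __name__ == "__main__":
    # Download the full model repo (GGUF file plus config) to a local folder.
    # NOTE: repo_id and local_dir are assumptions -- verify against the repo page.
    local_path = snapshot_download(
        repo_id="llmware/dragon-qwen-7b-gguf",
        local_dir="dragon-qwen-7b-gguf",
    )
    print(local_path)
```

The guard keeps the download from running on import; the returned path is the local snapshot folder.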