doberst committed on
Commit bb0e923
1 Parent(s): 0e98d7e

Update README.md

Files changed (1): README.md +7 -7
README.md CHANGED
@@ -3,13 +3,13 @@ license: apache-2.0
 inference: false
 ---
 
-# bling-tiny-llama-ov
+# bling-phi-3-ov
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-**bling-tiny-llama-ov** is an OpenVino int4 quantized version of BLING Tiny-Llama 1B, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
+**bling-phi-3-ov** is an OpenVino int4 quantized version of BLING Phi-3, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
 
-[**bling-tiny-llama**](https://huggingface.co/llmware/bling-tiny-llama-v0) is a fact-based question-answering model, optimized for complex business documents.
+[**bling-phi-3**](https://huggingface.co/llmware/bling-phi-3) is a fact-based question-answering model, optimized for complex business documents.
 
 Get started right away with [OpenVino](https://github.com/openvinotoolkit/openvino)
 
@@ -19,13 +19,13 @@ Looking for AI PC solutions and demos, contact us at [llmware](https://www.llmwa
 ### Model Description
 
 - **Developed by:** llmware
-- **Model type:** tinyllama
-- **Parameters:** 1.1 billion
-- **Model Parent:** llmware/bling-tiny-llama-v0
+- **Model type:** phi3
+- **Parameters:** 3.8 billion
+- **Model Parent:** llmware/bling-phi-3
 - **Language(s) (NLP):** English
 - **License:** Apache 2.0
 - **Uses:** Fact-based question-answering
-- **RAG Benchmark Accuracy Score:** 86.5
+- **RAG Benchmark Accuracy Score:** 99.5
 - **Quantization:** int4
 
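
For orientation (not part of the commit): the README only points to the OpenVino toolkit, so below is a minimal sketch of running the quantized IR through the `optimum-intel` OpenVINO integration instead. The `<human>: ... <bot>:` prompt wrapper, the sample context/question, and the assumption that the `llmware/bling-phi-3-ov` repo ships tokenizer files alongside the IR are illustrative assumptions; check the parent `llmware/bling-phi-3` card for the canonical usage.

```python
# Sketch only: load the int4 OpenVINO build via optimum-intel and run a
# fact-based Q&A prompt. The prompt wrapper and repo layout are assumptions.
from optimum.intel.openvino import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "llmware/bling-phi-3-ov"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)  # model.to("GPU") to target an Intel GPU

# BLING models answer questions grounded in a supplied text passage
context = "The invoice total is $12,450 and payment is due on June 30."
question = "What is the invoice total?"
prompt = f"<human>: {context}\n{question}\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)

# Decode only the newly generated tokens
answer = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```

The same IR can also be run with the llmware library or the OpenVINO GenAI pipeline; optimum-intel is shown here only because it keeps the familiar transformers-style generate interface.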