Update README.md
README.md CHANGED

```diff
@@ -1,27 +1,23 @@
 ---
 license: apache-2.0
 inference: false
-tags: [green, p1, llmware-fx, ov, emerald]
 ---
 
-# slim-summary-
+# slim-summary-phi-3
 
-**slim-summary-
+**slim-summary-phi-3** is a specialized function calling model that summarizes a given text and generates as output a Python list of summary points.
 
-This is an OpenVino int4 quantized version of slim-summary-tiny, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
+This is the base PyTorch version of the model, useful for further fine-tuning. For faster inference, we would recommend using either the GGUF or OpenVino version of the model.
 
 ### Model Description
 
 - **Developed by:** llmware
--- **Model type:**
+- **Model type:** phi-3
--- **Parameters:**
+- **Parameters:** 3.8 billion
--- **Model Parent:** llmware/slim-summary-
+- **Model Parent:** llmware/slim-summary-phi-3
 - **Language(s) (NLP):** English
 - **License:** Apache 2.0
 - **Uses:** Summary bullet points extracted from complex business documents
--- **RAG Benchmark Accuracy Score:** NA
--- **Quantization:** int4
 
 
 ## Model Card Contact
```
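Since the model generates its summary as the text of a Python list, downstream code can turn that text back into a native list with the standard library's `ast.literal_eval`, which parses literals without executing arbitrary code (unlike `eval`). A minimal sketch, assuming a hypothetical raw output string — the sample text is illustrative, not actual model output:

```python
import ast

# Hypothetical raw output: the model emits its summary as the text
# of a Python list of summary points (sample content is illustrative).
raw_output = (
    "['Revenue grew 12% year over year', "
    "'Operating margin expanded', "
    "'Full-year guidance was raised']"
)

# ast.literal_eval safely parses the list literal into a Python list.
summary_points = ast.literal_eval(raw_output)

for point in summary_points:
    print("-", point)
```

In practice the raw string would come from the model's generated text; wrapping the parse in a `try/except (ValueError, SyntaxError)` guards against malformed generations.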