---
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- LLM
- Universal-NER
- NER
inference: false
---

# Quantized version of Universal-NER/UniNER-7B-all

[Universal-NER/UniNER-7B-all](https://huggingface.co/Universal-NER/UniNER-7B-all) quantized to 4-bit with GPTQ and stored with a 1 GB shard size.

## Model Description

The model [Universal-NER/UniNER-7B-all](https://huggingface.co/Universal-NER/UniNER-7B-all) was quantized to 4-bit with group_size 128 and ascending_order=True, using the [auto-gptq integration in transformers](https://huggingface.co/blog/gptq-integration).
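
Below is a minimal sketch of how such a quantization could be reproduced with the transformers GPTQ integration. It is an assumption based on the linked blog post, not the exact script used for this checkpoint; in particular, mapping the ascending_order=True setting above onto transformers' `desc_act` flag is a guess, and the calibration dataset (`"c4"`) is illustrative.

```python
# Sketch based on the transformers GPTQ integration blog post; NOT the exact
# script used to produce this checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "Universal-NER/UniNER-7B-all"
tokenizer = AutoTokenizer.from_pretrained(model_id)

gptq_config = GPTQConfig(
    bits=4,            # 4-bit weights
    group_size=128,    # quantization group size, as noted above
    desc_act=True,     # assumption: transformers' activation-ordering flag
    dataset="c4",      # illustrative calibration dataset
    tokenizer=tokenizer,
)

# Quantizes while loading; requires the auto-gptq package and a GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=gptq_config,
    device_map="auto",
)

# Save the quantized weights with a 1 GB shard size, as in this repository.
model.save_pretrained("UniNER-7B-all-GPTQ", max_shard_size="1GB")
```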

## Evaluation

TODO

## Prompt template

The prompt template is the same as for the full-precision model:

```python
prompt_template = """A virtual assistant answers questions from a user based on the provided text.
USER: Text: {input_text}
ASSISTANT: I’ve read this text.
USER: What describes {entity_name} in the text?
ASSISTANT:
"""
```

## Usage

For best results, format inputs according to the prompt template above during inference:

```python
prompt = prompt_template.format_map(
    {
        "input_text": "Cologne is a great city in Germany - maybe even the greatest ;)",
        "entity_name": "city",
    }
)
```
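
An end-to-end inference sketch is shown below. The `model_id` placeholder, the generation parameters, and the device handling are assumptions (replace `model_id` with this repository's Hub id); it reuses `prompt_template` from the Prompt template section above.

```python
# Sketch of loading the 4-bit GPTQ checkpoint and querying it; requires the
# auto-gptq package. model_id is an assumed placeholder, not a confirmed id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SebastianSchramm/UniNER-7B-all-GPTQ"  # assumption: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# prompt_template is defined in the Prompt template section above.
prompt = prompt_template.format_map(
    {
        "input_text": "Cologne is a great city in Germany - maybe even the greatest ;)",
        "entity_name": "city",
    }
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens (the assistant's answer).
answer = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)  # the model's answer, i.e. the entities matching "city"
```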