Commit 4f714e1 · Parent(s): 20e2efc

Update README.md
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 base_model: openlm-research/open_llama_3b
-inference:
+inference: true
 model_type: llama
 prompt_template: |
   ### Instruction:\n
@@ -9,10 +9,32 @@ prompt_template: |
 created_by: mwitiderrick
 tags:
 - transformers
+license: apache-2.0
+language:
+- en
+library_name: transformers
+pipeline_tag: text-generation
 ---
 # OpenLLaMA: An Open Reproduction of LLaMA
 
 This is an [OpenLlama model](https://huggingface.co/openlm-research/open_llama_3b) that has been fine-tuned on 2 epochs of the first 5000 samples from the
 [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus) dataset.
 
-The modified version of the dataset can be found [here](mwitiderrick/Open-Platypus)
+The modified version of the dataset can be found [here](mwitiderrick/Open-Platypus)
+
+## Usage
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
+
+tokenizer = AutoTokenizer.from_pretrained("mwitiderrick/open_llama_3b_chat_v_0.1")
+model = AutoModelForCausalLM.from_pretrained("mwitiderrick/open_llama_3b_chat_v_0.1")
+query = "How can I evaluate the performance and quality of the generated text from language models?"
+text_gen = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
+output = text_gen(f"### Instruction:\n{query}### Response:\n")
+print(output[0]['generated_text'])
+"""
+### Instruction:
+How can I evaluate the performance and quality of the generated text from language models?### Response:
+I want to evaluate the performance of the language model by comparing the generated text with the original text. I can use a similarity measure to compare the two texts. For example, I can use the Levenshtein distance, which measures the number of edits needed to transform one text into another. The Levenshtein distance between two texts is the minimum number of edits needed to transform one text into another. The Levenshtein distance between two texts is the minimum number of edits needed to transform one text into another. The Levenshtein distance between two texts is the minimum number of edits needed to transform one text into another. The Levenshtein distance between two texts is the minimum number of edits needed to transform one text into another. The Levenshtein distance between two texts is the minimum number
+"""
+```