Update strings.py
strings.py (+3, -1)
@@ -1,9 +1,11 @@
-TITLE = "LLaMA
+TITLE = "LLaMA 13B(Int8 Quantized) Model Playground"
 
 ABSTRACT = """
 This Space allows you to play with the one of the variant(13B) as part of the [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/)(Large Language Model Meta AI) released by Meta AI.
 
 LLaMA is a general purpose language model, so it behaves differently comparing to [ChatGPT](https://openai.com/blog/chatgpt/). Even though the UI or this Space application is in Chat-like form, the generated output will be the completion of the given prompt. Because of this, your prompts should appropriately guide what to be generated.
+
+Thanks to tloen who provided the modified code base to achieve int8 Quantization ([repo](https://github.com/tloen/llama-int8)).
 """
 
 EXAMPLES = [
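The abstract notes that LLaMA is a completion model, so prompts should be phrased as the beginning of the desired output rather than as a chat instruction. A minimal sketch of that idea (the `build_prompt` helper and the Q/A framing are illustrative assumptions, not part of this repository):

```python
# Hypothetical illustration: a completion model continues the given text,
# so frame a question as a transcript whose most likely continuation is
# the answer, instead of asking it chat-style.

def build_prompt(question: str) -> str:
    # Q/A framing nudges the model to emit an answer after "A:".
    return f"Q: {question}\nA:"

chat_style = "What is the capital of France?"          # works for ChatGPT
completion_style = build_prompt(chat_style)            # better for LLaMA

print(completion_style)
```

The same principle applies to the Space's EXAMPLES list: each entry guides the model by supplying a prefix whose natural continuation is the intended output.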