import argparse
import gradio as gr
from ui import chat

def main(args):
    demo = gr.ChatInterface(
        fn=chat,
        examples=["hello", "how are you?", "What is a Large Language Model?"],
        title="Gradio 🤝 TGI",
        description="This space is a template that you can fork/duplicate for your own use. "
        "It lets you build LLM-powered ideas on top of [Gradio](https://www.gradio.app/) "
        "and an open LLM served locally by [TGI (Text Generation Inference)](https://huggingface.co/docs/text-generation-inference/en/index). "
        "Below is a placeholder Gradio ChatInterface for you to try out Mistral-7B, backed by TGI's efficiency. \n\n"
        "To use this space for your own use case, follow the simple steps below:\n"
        "1. [Duplicate](https://huggingface.co/spaces/chansung/gradio_together_tgi/blob/main/app/main.py?duplicate=true) this space. \n"
        "2. Set which LLM you wish to use (e.g., mistralai/Mistral-7B-Instruct-v0.2). \n"
        "3. Inside [app/main.py](https://huggingface.co/spaces/chansung/gradio_together_tgi/blob/main/app/main.py), write your Gradio application. \n"
        "4. (Bonus➕) [app/gen](https://huggingface.co/spaces/chansung/gradio_together_tgi/tree/main/app/gen) provides handy utility functions "
        "to asynchronously generate text by interacting with the locally served LLM.",
        multimodal=False,
    )
    # Queue incoming requests: serve up to 20 concurrently, hold at most 256 waiting.
    demo.queue(
        default_concurrency_limit=20,
        max_size=256,
    ).launch(server_name="0.0.0.0", server_port=args.port)
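
# ---------------------------------------------------------------------------
# For reference, a minimal sketch of what the `chat` callback imported from
# `ui` above might look like. This is an illustrative assumption, not the
# space's actual app/ui.py: it presumes TGI is serving the model locally on
# port 8080 and reaches it through huggingface_hub.InferenceClient. The name
# `_example_chat` is hypothetical and never called by this app.
def _example_chat(message, history):
    from huggingface_hub import InferenceClient

    client = InferenceClient("http://0.0.0.0:8080")  # assumed local TGI endpoint
    # gr.ChatInterface calls fn(message, history); yielding a growing partial
    # string streams tokens into the UI as they arrive from TGI.
    response = ""
    for token in client.text_generation(message, max_new_tokens=512, stream=True):
        response += token
        yield response
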
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Gradio chat app backed by a locally served LLM via TGI")
    parser.add_argument("--port", type=int, default=7860, help="Port to expose the Gradio app on")
    args = parser.parse_args()
    main(args)