Spaces: bfdev (Runtime error)

bfdev and akhaliq (HF staff) committed
Commit 428a94f • 0 parent(s)

Duplicate from akhaliq/kogpt

Co-authored-by: Ahsen Khaliq <akhaliq@users.noreply.huggingface.co>

Files changed (4)
  1. .gitattributes +27 -0
  2. README.md +38 -0
  3. app.py +32 -0
  4. requirements.txt +3 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,38 @@
+ ---
+ title: Kogpt
+ emoji: 🌍
+ colorFrom: red
+ colorTo: gray
+ sdk: gradio
+ app_file: app.py
+ pinned: false
+ duplicated_from: akhaliq/kogpt
+ ---
+
+ # Configuration
+
+ `title`: _string_
+ Display title for the Space
+
+ `emoji`: _string_
+ Space emoji (emoji-only character allowed)
+
+ `colorFrom`: _string_
+ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
+
+ `colorTo`: _string_
+ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
+
+ `sdk`: _string_
+ Can be either `gradio` or `streamlit`
+
+ `sdk_version` : _string_
+ Only applicable for `streamlit` SDK.
+ See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
+
+ `app_file`: _string_
+ Path to your main application file (which contains either `gradio` or `streamlit` Python code).
+ Path is relative to the root of the repository.
+
+ `pinned`: _boolean_
+ Whether the Space stays on top of your list.
app.py ADDED
@@ -0,0 +1,32 @@
+ import gradio as gr
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+
+ tokenizer = AutoTokenizer.from_pretrained(
+     'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b',
+     bos_token='[BOS]', eos_token='[EOS]', unk_token='[UNK]', pad_token='[PAD]', mask_token='[MASK]'
+ )
+
+ model = AutoModelForCausalLM.from_pretrained(
+     'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b',
+     pad_token_id=tokenizer.eos_token_id,
+     torch_dtype=torch.float16, low_cpu_mem_usage=True
+ ).to(device='cpu', non_blocking=True)
+ _ = model.eval()
+
+ title = "KoGPT"
+ description = "Gradio demo for KoGPT(Korean Generative Pre-trained Transformer). To use it, simply add your text, or click one of the examples to load them. Read more at the links below."
+ article = "<p style='text-align: center'><a href='https://github.com/kakaobrain/kogpt' target='_blank'>KoGPT: KakaoBrain Korean(hangul) Generative Pre-trained Transformer</a> | <a href='https://huggingface.co/kakaobrain/kogpt' target='_blank'>Huggingface Model</a></p>"
+ examples=[['인간처럼 생각하고, 행동하는 \'지능\'을 통해 인류가 이제까지 풀지 못했던']]
+ def greet(text):
+     prompt = text
+     with torch.no_grad():
+         tokens = tokenizer.encode(prompt, return_tensors='pt').to(device='cpu', non_blocking=True)
+         gen_tokens = model.generate(tokens, do_sample=True, temperature=0.8, max_length=64)
+         generated = tokenizer.batch_decode(gen_tokens)[0]
+
+     return generated
+
+ iface = gr.Interface(fn=greet, inputs="text", outputs="text", title=title, description=description, article=article, examples=examples,enable_queue=True)
+ iface.launch()
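
For reference (not part of this commit), the sketch below exercises the same load-and-generate path as app.py without the Gradio layer, so the prompt, temperature, and max_length can be varied from a plain script. The `generate` helper and the device/dtype switch are illustrative additions: the float16 weights of KoGPT6B-ryan1.5b take roughly 12 GB, and half-precision matmuls are generally only supported on GPU, which is why the sketch falls back to float32 on CPU.

```python
# Illustrative sketch only; not part of commit 428a94f.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Pick a device; fall back to float32 on CPU because half-precision
# matmuls are generally unsupported there.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
dtype = torch.float16 if device == 'cuda' else torch.float32

tokenizer = AutoTokenizer.from_pretrained(
    'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b',
    bos_token='[BOS]', eos_token='[EOS]', unk_token='[UNK]',
    pad_token='[PAD]', mask_token='[MASK]'
)
model = AutoModelForCausalLM.from_pretrained(
    'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b',
    pad_token_id=tokenizer.eos_token_id,
    torch_dtype=dtype, low_cpu_mem_usage=True
).to(device).eval()

def generate(prompt: str, temperature: float = 0.8, max_length: int = 64) -> str:
    """Tokenize the prompt, sample a continuation, and decode it back to text."""
    with torch.no_grad():
        tokens = tokenizer.encode(prompt, return_tensors='pt').to(device)
        gen_tokens = model.generate(
            tokens, do_sample=True, temperature=temperature, max_length=max_length
        )
    return tokenizer.batch_decode(gen_tokens)[0]

if __name__ == '__main__':
    print(generate("인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던"))
```

To run the committed demo itself, `pip install -r requirements.txt` plus `gradio` (which the Space runtime normally supplies via the `sdk` setting) followed by `python app.py` should suffice, hardware permitting.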
requirements.txt ADDED
@@ -0,0 +1,3 @@
+ transformers
+ sentencepiece
+ torch