Update README.md
README.md
CHANGED
@@ -12,7 +12,7 @@ tags: []
 **Llama-3-SURPASSONE-JP-8B** is a large language model trained by [SURPASSONE, Inc](https://surpassone.com/).
 Based on [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), this model has undergone additional post-training in Japanese to expand its instruction-following capabilities in Japanese.
 
-This model is specialized in generating multiple-choice questions (MCQs) with options, correct answers, and corresponding explanations on “Nursing Care” when given a specific topic.
+This model is specialized in generating multiple-choice questions (MCQs) with options, correct answers, and corresponding explanations on “Nursing Care” when given a specific topic.
 
 For more details, please refer to [our blog post](https://docs.google.com/document/d/1ENAEzgV3n-sFiSoV3oQBTgzjeyTfmL64zEczepTKEW0/edit?usp=sharing).
 
@@ -71,31 +71,6 @@ inputs = tokenizer(
 )
 ], return_tensors = "pt").to("cuda")
 
-from transformers import TextStreamer
-text_streamer = TextStreamer(tokenizer)
-_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 1028)
-
-# for QA generation
-
-# Define the formatting function and the prompt template
-alpaca_prompt = """以下は質問です。質問に適切に答える回答を書いてください。
-
-### 質問:
-{}
-
-### 答え:
-{}"""
-
-eos_token_id = tokenizer.eos_token_id
-
-inputs = tokenizer(
-[alpaca_prompt.format(
-"介護福祉士はどのような責任を負うべきですか?", # Question
-"" # Answer - leave this blank for generation!
-)],
-return_tensors="pt"
-).to("cuda")
-
 from transformers import TextStreamer
 text_streamer = TextStreamer(tokenizer)
 _ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 1028)
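Because the hunk above starts mid-snippet, only the tail of the README's generation example is visible in this diff. Below is a minimal, self-contained sketch of how the retained streaming-generation code might be used end to end to ask the model for a nursing-care MCQ; the repository id, model-loading code, and prompt template are illustrative assumptions, not the README's exact snippet.

```python
# Minimal sketch (assumptions): the repo id and prompt template below are
# illustrative, not the exact code from the README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "SURPASSONE/Llama-3-SURPASSONE-JP-8B"  # hypothetical Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to("cuda")

# Illustrative Japanese instruction prompt, roughly: "Create a multiple-choice
# question with options, the correct answer, and an explanation on the topic
# below." The topic here is 認知症ケア (dementia care).
prompt = (
    "以下のトピックについて、選択肢、正解、解説つきの多肢選択問題を作成してください。\n\n"
    "### トピック:\n"
    "認知症ケア\n\n"
    "### 問題:\n"
)

inputs = tokenizer([prompt], return_tensors="pt").to("cuda")

# Stream the generated MCQ to stdout token by token, as in the snippet kept by this commit.
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=1028)
```

TextStreamer prints decoded tokens as they are produced, which is why the return value of generate is discarded in the README's own snippet as well.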