watashihakobashi committed on
Commit 783ba2e
1 Parent(s): 0143b56

create README_en.md

Files changed (1): README_en.md +86 -0

README_en.md ADDED
@@ -0,0 +1,86 @@
+ ---
+ license: llama2
+ language:
+ - ja
+ - en
+ ---
+
+ ## Model Overview
+ This is a comedy language model built by adding Japanese vocabulary to Llama2-13b, running continued pre-training, and then fine-tuning on comedy data.
+ The model excels at "Ogiri," a Japanese wordplay game that involves answering questions or completing sentences in a humorous or witty manner.
+ Ogiri is a traditional form of Japanese entertainment that showcases quick wit and creativity, often performed in groups where participants compete to give the most amusing response. The model generates Ogiri responses in both Japanese and English, making it versatile for engaging with this unique aspect of Japanese culture in a multilingual context.
+ This model was supported by the [AWS LLM Development Support Program](https://aws.amazon.com/jp/local/llm-development-support-program/). Continued pre-training was run in parallel on four AWS Trainium [trn1.32xlarge](https://aws.amazon.com/jp/ec2/instance-types/trn1/) instances.
+
+ * License: [LLAMA 2 COMMUNITY LICENSE](https://github.com/facebookresearch/llama/blob/main/LICENSE)
+ * Library: [neuronx-nemo-megatron](https://github.com/aws-neuron/neuronx-nemo-megatron)
+
+ ### Tokenizer
+ Llama2's original tokenizer, with a vocabulary of 32,000, was expanded with 13,046 Japanese vocabulary items learned via BPE, for a total vocabulary size of 45,046. When adding vocabulary, single-character kanji tokens were restricted to commonly used kanji and kanji that appear frequently in the training data. Tokens mixing digits or symbols with letters were avoided by removing digits and symbols from the data beforehand.
+
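+ As a quick, illustrative check (not part of the original card; the example sentence is arbitrary), the expanded vocabulary can be inspected with the Hugging Face tokenizer:
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("watashiha/Watashiha-Llama-2-13B-Ogiri-sft", use_fast=True)
+
+ # 32,000 original Llama2 tokens + 13,046 added Japanese tokens
+ print(len(tokenizer))  # expected per the card: 45046
+
+ # Japanese text should segment into far fewer tokens than with the stock Llama2 tokenizer
+ print(tokenizer.tokenize("時計がお腹を空かせたらどうなる？"))
+ ```
+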
+ ## Training Data
+ The model was pre-trained on the following corpora, totaling 65 billion tokens:
+ * Japanese data from [C4](https://huggingface.co/datasets/mc4)
+ * Japanese data from [CC-100](https://huggingface.co/datasets/cc100)
+ * Japanese data from [OSCAR](https://huggingface.co/datasets/oscar)
+ * Japanese and English Wikipedia dumps ([Japanese Main Page](https://ja.wikipedia.org/wiki/%E3%83%A1%E3%82%A4%E3%83%B3%E3%83%9A%E3%83%BC%E3%82%B8), [English Main Page](https://en.wikipedia.org/wiki/Main_Page))
+ * Proprietary company data
+
+ ## How to Use
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "watashiha/Watashiha-Llama-2-13B-Ogiri-sft"
+ tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
+ # Load the weights in bfloat16 to reduce memory use
+ model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
+
+ # Move the model to GPU when one is available
+ if torch.cuda.is_available():
+     model = model.to("cuda")
+
+ odai = "What happens when a clock is hungry?"
+ # Prompt template used for the comedy fine-tuning
+ text = f"""
+ Below is a combination of instructions explaining the task and contextually relevant input. Write a response that appropriately fulfills the request.
+
+ Instructions:
+ The input sentence is a prompt for a comedy skit. Generate a funny punchline that aligns with the prompt.
+
+ Input:
+ {odai}
+
+ Response:
+ """
+ text = text.lstrip()
+
+ with torch.no_grad():
+     token_ids = tokenizer.encode(text, return_tensors="pt").to(model.device)
+     # Sample a punchline of up to 64 new tokens
+     output_ids = model.generate(
+         token_ids,
+         do_sample=True,
+         min_new_tokens=1,
+         max_new_tokens=64,
+         top_p=0.9,
+         top_k=50,
+         temperature=0.8,
+         pad_token_id=tokenizer.pad_token_id,
+         eos_token_id=tokenizer.eos_token_id,
+     )
+ output = tokenizer.decode(output_ids.tolist()[0], skip_special_tokens=True)
+ print(output)
+ """
+ Below is a combination of instructions explaining the task and contextually relevant input. Write a response that appropriately fulfills the request.
+
+ Instructions:
+ The input sentence is a prompt for a comedy skit. Generate a funny punchline that aligns with the prompt.
+
+ Input:
+ What happens when a clock is hungry?
+
+ Response:
+ It takes time to get back on top!
+ """
+ ```
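+
+ Because the decoded output echoes the whole prompt, the punchline alone can be recovered by splitting on the final "Response:" marker. A minimal follow-up sketch (not from the original card):
+
+ ```python
+ # Keep only the text after the last "Response:" line
+ punchline = output.split("Response:")[-1].strip()
+ print(punchline)  # e.g. "It takes time to get back on top!"
+ ```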
+
+ ### How to Run on AWS inf2.xlarge
+
+ As of January 24, 2024, [AWS inf2 instances](https://aws.amazon.com/ec2/instance-types/inf2/) are a more cost-effective way to serve models with over 10 billion parameters than GPU instances.
+ The model and source code are available [here](https://huggingface.co/watashiha/Watashiha-Llama-2-13B-Ogiri-sft-neuron).
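+
+ For orientation only, below is a rough sketch of loading the Neuron-compiled checkpoint with [optimum-neuron](https://github.com/huggingface/optimum-neuron); treat the `NeuronModelForCausalLM` usage as an assumption rather than the card's published procedure, and defer to the instructions in the linked repository:
+
+ ```python
+ # Assumes an inf2 instance with the Neuron SDK and `pip install optimum-neuron`
+ from optimum.neuron import NeuronModelForCausalLM
+ from transformers import AutoTokenizer
+
+ repo = "watashiha/Watashiha-Llama-2-13B-Ogiri-sft-neuron"
+ tokenizer = AutoTokenizer.from_pretrained(repo)
+ model = NeuronModelForCausalLM.from_pretrained(repo)  # loads precompiled Neuron artifacts
+
+ inputs = tokenizer("What happens when a clock is hungry?", return_tensors="pt")
+ output_ids = model.generate(**inputs, do_sample=True, max_new_tokens=64, top_p=0.9)
+ print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
+ ```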