beomi committed on
Commit 1dde6e0
1 Parent(s): 565fd74

Create README.md

Files changed (1):
  1. README.md +196 -0
README.md ADDED
@@ -0,0 +1,196 @@
---
language:
- ko
- en
- zh
- ja
license: other
library_name: transformers
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
pipeline_tag: text-generation
tags:
- pytorch
---

# Gemma-Mling: Multilingual Gemma

> Update @ 2024.04.15: First release of the Gemma-Mling 7B model

**Original Gemma Model Page**: [Gemma](https://ai.google.dev/gemma/docs)

This model card corresponds to the 7B base version of the **Gemma-Mling** model.

**Resources and Technical Documentation**:

* [Google's original Gemma-7B](https://huggingface.co/google/gemma-7b)
* [Training Code @ GitHub: Gemma-EasyLM](https://github.com/Beomi/Gemma-EasyLM)

**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)

**Citation**

```bibtex
@misc{gemma_mling_7b,
  author    = { {Junbum Lee, Taekyoon Choi} },
  title     = { gemma-mling-7b },
  year      = 2024,
  url       = { https://huggingface.co/beomi/gemma-mling-7b },
  publisher = { Hugging Face }
}
```

**Model Developers**: Junbum Lee (Beomi) & Taekyoon Choi (Taekyoon)

## Model Information

### Usage

Below we share some code snippets that show how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.

#### Running the model on a CPU

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("beomi/gemma-mling-7b")
model = AutoModelForCausalLM.from_pretrained("beomi/gemma-mling-7b")

# Korean prompt: "The difference between machine learning and deep learning is"
input_text = "머신러닝과 딥러닝의 차이는"
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```

#### Running the model on a single / multi GPU

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("beomi/gemma-mling-7b")
model = AutoModelForCausalLM.from_pretrained("beomi/gemma-mling-7b", device_map="auto")

# Korean prompt: "The difference between machine learning and deep learning is"
input_text = "머신러닝과 딥러닝의 차이는"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
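
#### Loading in lower precision (optional)

Full-precision weights for a 7B model can exceed the memory of smaller GPUs. The snippet below is a minimal sketch, not part of the original card, that uses the standard `torch_dtype` argument of `from_pretrained` to load the weights in `bfloat16`; adjust the dtype to whatever your hardware supports.

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("beomi/gemma-mling-7b")
# bfloat16 roughly halves memory use compared to float32 weights.
model = AutoModelForCausalLM.from_pretrained(
    "beomi/gemma-mling-7b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Korean prompt: "The difference between machine learning and deep learning is"
input_text = "머신러닝과 딥러닝의 차이는"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```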

### Inputs and outputs

* **Input:** Text string, such as a question, a prompt, or a document to be
  summarized.
* **Output:** Generated multilingual text in response to the input, such
  as an answer to a question or a summary of a document (see the sketch below).

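As an illustration of these inputs and outputs, the following minimal sketch (not from the original card; the prompt strings are made up for illustration) feeds one prompt per supported language (ko, en, zh, ja) to the model and prints each generated continuation:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("beomi/gemma-mling-7b")
model = AutoModelForCausalLM.from_pretrained(
    "beomi/gemma-mling-7b", torch_dtype=torch.bfloat16, device_map="auto"
)

# One illustrative prompt per supported language.
prompts = [
    "머신러닝과 딥러닝의 차이는",  # Korean
    "The difference between machine learning and deep learning is",  # English
    "机器学习和深度学习的区别在于",  # Chinese
    "機械学習とディープラーニングの違いは",  # Japanese
]

for prompt in prompts:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
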
## Implementation Information

Details about the model internals.

### Software

Training was done using [beomi/Gemma-EasyLM](https://github.com/Beomi/Gemma-EasyLM).

## Evaluation

Model evaluation metrics and results.

### Benchmark Results

TBD

## Usage and Limitations

These models have certain limitations that users should be aware of.

### Intended Usage

Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.

* Content Creation and Communication
  * Text Generation: These models can be used to generate creative text formats
    such as poems, scripts, code, marketing copy, and email drafts.
* Research and Education
  * Natural Language Processing (NLP) Research: These models can serve as a
    foundation for researchers to experiment with NLP techniques, develop
    algorithms, and contribute to the advancement of the field.
  * Language Learning Tools: Support interactive language learning experiences,
    aiding in grammar correction or providing writing practice.
  * Knowledge Exploration: Assist researchers in exploring large bodies of text
    by generating summaries or answering questions about specific topics.

### Limitations

* Training Data
  * The quality and diversity of the training data significantly influence the
    model's capabilities. Biases or gaps in the training data can lead to
    limitations in the model's responses.
  * The scope of the training dataset determines the subject areas the model can
    handle effectively.
* Context and Task Complexity
  * LLMs are better at tasks that can be framed with clear prompts and
    instructions. Open-ended or highly complex tasks might be challenging.
  * A model's performance can be influenced by the amount of context provided
    (longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
  * Natural language is inherently complex. LLMs might struggle to grasp subtle
    nuances, sarcasm, or figurative language.
* Factual Accuracy
  * LLMs generate responses based on information they learned from their
    training datasets, but they are not knowledge bases. They may generate
    incorrect or outdated factual statements.
* Common Sense
  * LLMs rely on statistical patterns in language. They might lack the ability
    to apply common sense reasoning in certain situations.

### Ethical Considerations and Risks

The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:

* Bias and Fairness
  * LLMs trained on large-scale, real-world text data can reflect socio-cultural
    biases embedded in the training material. These models underwent careful
    scrutiny; input data pre-processing and posterior evaluations are reported
    in this card.
* Misinformation and Misuse
  * LLMs can be misused to generate text that is false, misleading, or harmful.
  * Guidelines for responsible use are provided with the model; see the
    [Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability
  * This model card summarizes details on the models' architecture,
    capabilities, limitations, and evaluation processes.
  * A responsibly developed open model offers the opportunity to share
    innovation by making LLM technology accessible to developers and researchers
    across the AI ecosystem.

Risks identified and mitigations:

* Perpetuation of biases: Continuous monitoring (using evaluation metrics and
  human review) and the exploration of de-biasing techniques are encouraged
  during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
  are essential. Developers are encouraged to exercise caution and implement
  appropriate content safety safeguards based on their specific product policies
  and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
  end-user education can help mitigate malicious applications of LLMs.
  Educational resources and reporting mechanisms for users to flag misuse are
  provided. Prohibited uses of Gemma models are outlined in the
  [Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered to remove PII
  (Personally Identifiable Information). Developers are encouraged to comply
  with privacy regulations using privacy-preserving techniques.

## Acknowledgement

Training was supported by the [TPU Research Cloud](https://sites.research.google/trc/) program.