Update README.md
README.md CHANGED
@@ -59,7 +59,7 @@ Gemma 2 2b was trained on a wide dataset of 2 trillion tokens, which is an incre
 - Code: Exposing the model to code helps it to learn the syntax and patterns of programming languages, which improves its ability to generate code or understand code-related questions.
 - Mathematics: Training on mathematical text helps the model learn logical reasoning, symbolic representation, and to address mathematical queries.
 
-For more details check out their blog post here: https://
+For more details check out their blog post here: https://developers.googleblog.com/en/smaller-safer-more-transparent-advancing-responsible-ai-with-gemma/
 
 ## Special thanks
 