Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -20,8 +20,9 @@ This is the card for the Gemma2 9B CPT Sahabat-AI v1 base model which has underg
 
 The continued pre-training data for Gemma2 9B CPT Sahabat-AI v1 base model encompasses approximately 50B tokens.
 
-- **Co-initiated by:** PT GoTo Gojek Tokopedia Tbk, Indosat Ooredoo Hutchison
 - **Developed by:** PT GoTo Gojek Tokopedia Tbk, AI Singapore
+- **Funded by:** PT GoTo Gojek Tokopedia Tbk, AI Singapore
+- **Supported by:** PT Indosat Ooredoo Hutchison
 - **Model type:** Decoder
 - **Languages:** English, Indonesian, Javanese, Sundanese
 - **License:** [Gemma Community License](https://ai.google.dev/gemma/terms)
@@ -36,6 +37,7 @@ For the evaluation of general language capabilities, we employed the
 - [SEA HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
 - These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
 - We also added support for Javanese and Sundanese for the BHASA tasks whenever applicable
+- These tasks include examination questions on Humanities, Indonesian language, Local languages and cultures, Social science and STEM across primary, middle, and high school levels.
 - and the common English tasks from the [HuggingFace LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard).
 - These tasks consist of [IFEval, BBH, Math Lvl 5, GPQA, MuSR, and MMLU-PRO.](https://huggingface.co/docs/leaderboards/open_llm_leaderboard/about)
 - **Caveat**: Our results differ from the HuggingFace LLM Leaderboard because we have used [VLLM](https://docs.vllm.ai/en/latest/) as our inference platform. VLLM caps the context size at **4096 tokens** while HuggingFace was set to **8192 tokens**.
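The caveat above attributes the score gap to vLLM's 4096-token context cap. For reproducibility, that cap can be pinned explicitly when serving the model with vLLM; a minimal sketch (the repository id below is an assumed placeholder, not taken from this diff):

```shell
# Sketch only: serve the base model with vLLM and pin the context window
# to the 4096-token cap mentioned in the caveat.
# NOTE: the model repo id is an assumed placeholder.
vllm serve GoToCompany/gemma2-9b-cpt-sahabatai-v1-base \
    --max-model-len 4096
```

Setting `--max-model-len` explicitly (rather than relying on the engine's default derived from the model config) makes the evaluation context length visible in the serving command itself.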