---
license: other
tags:
- full
datasets:
- sarvamai/samvaad-hi-v1
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
base_model: google/gemma-2b
model-index:
- name: Gemma-2B
  results: []
---

# Gemma-2B-Samvaad

This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the [samvaad-hi-v1 dataset](https://huggingface.co/datasets/sarvamai/samvaad-hi-v1).

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Tensoic__Gemma-2B-Samvaad).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 42.55 |
| AI2 Reasoning Challenge (25-shot) | 46.59 |
| HellaSwag (10-shot)               | 68.17 |
| MMLU (5-shot)                     | 33.09 |
| TruthfulQA (0-shot)               | 39.95 |
| Winogrande (5-shot)               | 61.64 |
| GSM8k (5-shot)                    |  5.84 |

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto `TrainingArguments` follows after the framework versions):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0

### Framework versions

- Transformers 4.39.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
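For readers who want to reproduce a comparable run, the hyperparameters above map onto `transformers.TrainingArguments` roughly as shown below. This is a hedged sketch rather than the original training script: the output directory, dataset wiring, and launch command are assumptions not stated in this card.

```python
# Hypothetical reconstruction of the configuration from the hyperparameters
# listed above -- not the authors' original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma-2b-samvaad",   # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=4,   # 4 per device x 4 GPUs x 4 accumulation = 64 total
    per_device_eval_batch_size=8,    # 8 per device x 4 GPUs = 32 total
    gradient_accumulation_steps=4,
    num_train_epochs=1.0,
    lr_scheduler_type="cosine",
    seed=42,
    adam_beta1=0.9,                  # Adam with betas=(0.9, 0.999), epsilon=1e-08
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
# The card lists distributed_type: multi-GPU with 4 devices; a launcher such as
# `torchrun --nproc_per_node=4 train.py` would give the stated total batch sizes.
```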
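### Usage

A minimal inference sketch. The repo id `Tensoic/Gemma-2B-Samvaad` is inferred from the leaderboard details link above, and the sampling settings are illustrative defaults, not values from this card.

```python
# Minimal inference sketch; repo id inferred from the leaderboard link above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tensoic/Gemma-2B-Samvaad"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "भारत की राजधानी क्या है?"  # Hindi: "What is the capital of India?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the model was fine-tuned on a Hindi conversation dataset, Hindi prompts like the one above are the natural test case.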