
This model is fine-tuned from the UltraChat version of Gemma-7B (CorticalStack/gemma-7b-ultrachat-sft) on 9 Indian languages (Hindi, Tamil, Punjabi, Bengali, Gujarati, Oriya, Telugu, Kannada, Malayalam) plus English.
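
Below is a minimal usage sketch with the Transformers library. The repo id is a placeholder (replace it with this model's actual Hugging Face id), and the Gemma chat template is assumed to carry over from the UltraChat SFT base model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/genz-gemma-7b-indic"  # placeholder: replace with this model's repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are published in FP16
    device_map="auto",
)

# Gemma-style chat formatting via the tokenizer's chat template
# (assumed to be inherited from the UltraChat SFT base model).
messages = [{"role": "user", "content": "भारत की राजधानी क्या है?"}]  # "What is the capital of India?"
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```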

The model is trained on the closed-source GenZ_Vikas dataset, created entirely by university students aged 18-22 (hence the name GenZ). The dataset comprises 5.5 million Hindi instruction sets and 0.5 million instruction sets across the remaining languages plus English.
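
Because the dataset is closed-source, its exact schema is not public; the record below is only an illustrative sketch of what a single instruction-response pair in such a dataset typically looks like, not the actual GenZ_Vikas format.

```python
# Hypothetical instruction-set record (illustrative only; the real GenZ_Vikas
# schema is not published).
example_record = {
    "language": "hi",
    "instruction": "निम्नलिखित वाक्य का अंग्रेज़ी में अनुवाद करें।",  # "Translate the following sentence into English."
    "input": "मुझे किताबें पढ़ना पसंद है।",                           # "I like reading books."
    "output": "I like reading books.",
}
```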

The model was trained on a single A100 GPU for 9 days and 17 hours.

The model is benchmarked on the Indic LLM Leaderboard (https://huggingface.co/spaces/Cognitive-Lab/indic_llm_leaderboard), where it outperforms our previous models (GemmaOrca and GemmaUltra) on Hindi benchmarks. It also scores above Meta-Llama-3 on all currently available Hindi benchmarks (ARC, Hellaswag).

Release notes: https://www.linkedin.com/feed/update/urn:li:activity:7188399797291175936

Model size: 8.54B parameters (Safetensors, FP16)