Misal-7B-base-v0.1

Misal-7B-base-v0.1 is a language model based on Meta's Llama 2 architecture, pretrained on Marathi text data.

Built by smallstep.ai

Making of Misal

A detailed blog post is available here.

Pretraining

During the pretraining phase, the model was exposed to a corpus of approximately 2 billion Marathi tokens. This corpus consisted primarily of newspaper data spanning 2016 to 2022, sourced mainly from the CulturaX dataset, and was supplemented with data from l3cube, ai4bharat, and other internet-based sources.
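For reference, here is a minimal sketch of how the Marathi portion of CulturaX can be inspected with the Hugging Face datasets library. The uonlp/CulturaX repository id and the mr language config are assumptions based on the dataset's public Hub listing, and access may require accepting the dataset's terms:

```python
from datasets import load_dataset

# Stream the Marathi ("mr") split of CulturaX instead of downloading it fully;
# the corpus is large, and streaming lets you inspect samples immediately.
# Note: the Hub repo may be gated and require prior authentication.
culturax_mr = load_dataset("uonlp/CulturaX", "mr", split="train", streaming=True)

# Print the first 200 characters of a few documents as a sanity check.
for i, example in enumerate(culturax_mr):
    print(example["text"][:200])
    if i == 2:
        break
```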

We chose bfloat16 as the training precision due to stability issues with float16.
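Below is a minimal usage sketch that loads the model in the same bfloat16 precision it was trained in, assuming the standard transformers API; the Marathi prompt is an arbitrary illustrative example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "smallstepai/Misal-7B-base-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load the weights in bfloat16, matching the training precision noted above.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Generate a short continuation for an example Marathi prompt.
inputs = tokenizer("महाराष्ट्र ही", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is a base (non-instruction-tuned) model, it is best suited to text completion rather than chat-style prompting.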

License

The model inherits the license from meta-llama/Llama-2-7b.

Team

Sagar Sarkale, Prasad Mane, Shravani Chavan
