---
datasets:
- uonlp/CulturaX
- l3cube-pune/MarathiNLP
- ai4bharat/samanantar
language:
- mr
tags:
- marathi
library_name: transformers
pipeline_tag: text-generation
license: llama2
---

# Misal-7B-base-v0.1

Misal-7B-base-v0.1 is a language model based on Meta's Llama 2 architecture, pretrained on Marathi text data.

Built by [smallstep.ai](https://smallstep.ai/).

## Making of Misal

Detailed blog post [here](https://smallstep.ai/making-misal).

## Pretraining

During pretraining, the model was exposed to a corpus of approximately 2 billion Marathi tokens. This corpus consisted primarily of newspaper data spanning the years 2016 to 2022, sourced mainly from the CulturaX dataset. We supplemented this with additional sources such as l3cube, ai4bharat, and other internet-based datasets.

![image/png](https://framerusercontent.com/images/JWniDgAly6SFTvfdHtff0cLcnWY.png?scale-down-to=1024)

We trained in bfloat16 precision because float16 caused stability issues.

## License

The model inherits the license from meta-llama/Llama-2-7b.

## Team

Sagar Sarkale, Abhijeet Katte, Prasad Mane, Shravani Chavan
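
## Usage

Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, loading should follow the standard Hugging Face pattern. The sketch below is a minimal example, not an official snippet: the repository id `smallstepai/Misal-7B-base-v0.1` is an assumption (check the actual Hub page), and `transformers` plus `torch` must be installed. It loads in bfloat16 to match the training precision noted above.

```python
# Hypothetical usage sketch; the repo id is an assumption, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "smallstepai/Misal-7B-base-v0.1"  # assumed Hub repo id

def load_misal(model_id: str = MODEL_ID):
    """Load the tokenizer and model in bfloat16, matching the training precision."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # the card notes bfloat16 was used for stability
        device_map="auto",
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_misal()
    # Prompt in Marathi; the base model does plain text continuation.
    inputs = tokenizer("महाराष्ट्राची राजधानी", return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that this is a base (pretrained-only) model, so it is suited to text continuation rather than instruction following.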