---
library_name: transformers
tags:
- text-generation-inference
license: cc-by-4.0
datasets:
- AhmadMustafa/Urdu-Instruct-News-Article-Generation
language:
- ur
base_model: MBZUAI/MobiLlama-05B
---

# Model Card for MobiLLama-Urdu-Article-Generation

This is an instruct fine-tuned version of [MobiLlama](https://arxiv.org/abs/2402.16840), fine-tuned on the [Instruct Urdu Article Generation dataset](https://huggingface.co/datasets/AhmadMustafa/Urdu-Instruct-News-Article-Generation). That dataset was released as part of the [AYA Collections](https://arxiv.org/abs/2402.06619) by [Cohere For AI](https://cohere.for.ai). The model was fine-tuned for 8,500 steps to generate news articles in Urdu.

Fine-tuning statistics:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6246908d8031dcfa9ef6d80b/Y9t_6KZ8Uloe0N16yqTPk.png)

### Model Description

- **Developed by:** Ahmad Mustafa Anis
- **Language(s) (NLP):** Urdu
- **License:** CC BY 4.0
- **Finetuned from model:** [MBZUAI/MobiLlama-05B](https://huggingface.co/MBZUAI/MobiLlama-05B)

### Model Sources

- **Repository:** https://github.com/mbzuai-oryx/MobiLlama
- **Paper:** https://arxiv.org/abs/2402.16840

## Uses

This model is intended for on-device (mobile) generation of news articles in Urdu.

## Bias, Risks, and Limitations

This model may exhibit the biases and limitations common to large language models; no mitigation has been applied for them.

## How to Get Started with the Model

Use the code below to get started with the model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    "AhmadMustafa/MobiLLama-Urdu-Article-Generation",
    trust_remote_code=True,
).to(device)
tokenizer = AutoTokenizer.from_pretrained("MBZUAI/MobiLlama-05B")

# Instruction (Urdu): "Write an article about the given news.
# News: Sushant Singh case - the Indian Supreme Court has sought
# detailed replies from the parties."
instruction = """اس دی گی ایک خبر سے متعلق ایک مضمون لکھیں۔ خبر: سشانت سنگھ کیس بھارتی سپریم کورٹ نے فریقین سے مفصل جواب طلب کرلیا"""

# Format the example the same way as the fine-tuning data.
prompt = f"### Instruction: {instruction}\n ### Completion: "
inputs = tokenizer.encode(prompt, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
```

Sample output (Urdu; it opens with "Sure, here is an article about your news:"):

> جی ضرور، یہ رہا آپ کی خبر سے متعلق ایک مضمون: ممبئی بھارتی سپریم کورٹ نے بالی وڈ اداکار سشانت سنگھ راجپوت کیس کی سماعت کے دوران فریقین سے مفصل جواب طلب کرلیا بھارتی سپریم کورٹ کے جسٹس شوکت عزیز نے سشانت سنگھ راجپوت کیس کی سماعت کی سماعت کے دوران فریقین سے مفصل جواب طلب کیا سشانت سنگھ راجپوت کیس کی سماعت کے دوران فریقین سے مفصل جواب طلب کیا گیا جسٹس شوکت عزیز نے سشانت سنگھ راجپوت کیس کی سماعت کی سماعت کے دوران فریقین سے مفصل جواب طلب کیا سشانت سنگھ راجپوت کیس کی سماعت کے دوران فریقین سے مفصل جواب طلب کی

Please note that each training example ends with `<|EOS|>`, so you can use it as a stop token to control generation.

## Citation

**BibTeX:**

```bibtex
@misc{thawakar2024mobillama,
      title={MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT},
      author={Omkar Thawakar and Ashmal Vayani and Salman Khan and Hisham Cholakkal and Rao Muhammad Anwer and Michael Felsberg and Timothy Baldwin and Eric P. Xing and Fahad Shahbaz Khan},
      year={2024},
      eprint={2402.16840},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Model Card Authors

- Name: Ahmad Mustafa Anis
- Email: ahmadanis5050@gmail.com
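Because the fine-tuning examples end with `<|EOS|>`, decoded text may contain that marker followed by extra tokens. A minimal sketch of post-processing that truncates at the first marker (the helper name `truncate_at_eos` is illustrative, not part of the model's API):

```python
def truncate_at_eos(text: str, eos: str = "<|EOS|>") -> str:
    """Return everything before the first EOS marker, or the full text if absent."""
    idx = text.find(eos)
    return text if idx == -1 else text[:idx]

# Any content after the marker is discarded.
print(truncate_at_eos("### Completion: مضمون کا متن<|EOS|>extra"))
```

If the tokenizer encodes `<|EOS|>` as a single token, passing its id (e.g. via `tokenizer.convert_tokens_to_ids("<|EOS|>")`) as `eos_token_id` to `model.generate` stops decoding at that token instead, which avoids generating the trailing text at all.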