---
library_name: transformers
tags:
- text-generation-inference
license: cc-by-4.0
datasets:
- AhmadMustafa/Urdu-Instruct-News-Article-Generation
language:
- ur
base_model: MBZUAI/MobiLlama-05B
---

# Model Card for MobiLLama-Urdu-Article-Generation

This is an instruction-fine-tuned version of [MobiLlama](https://arxiv.org/abs/2402.16840), fine-tuned on the [Urdu Instruct News Article Generation dataset](https://huggingface.co/datasets/AhmadMustafa/Urdu-Instruct-News-Article-Generation). The dataset was released as part of the [Aya Collection](https://arxiv.org/abs/2402.06619) by [Cohere For AI](https://cohere.for.ai).

This model was fine-tuned for 8,500 steps to generate news articles in Urdu.


Fine-Tuning Statistics:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6246908d8031dcfa9ef6d80b/Y9t_6KZ8Uloe0N16yqTPk.png)


### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** Ahmad Mustafa Anis
- **Language(s) (NLP):** Urdu
- **License:** CC BY 4.0
- **Finetuned from model:** [MBZUAI/MobiLlama-05B](https://huggingface.co/MBZUAI/MobiLlama-05B)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/mbzuai-oryx/MobiLlama
- **Paper:** https://arxiv.org/abs/2402.16840

## Uses
This model is intended for use on mobile and other resource-constrained devices to generate news articles in Urdu.
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
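
As a rough illustration of a memory-conscious setup (not a deployment recipe from the authors; the half-precision choice below is an assumption), the ~0.5B-parameter model can be loaded in `float16` to roughly halve its weight memory:

```python3
# A minimal sketch (assumption: half precision is acceptable on your device).
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "AhmadMustafa/MobiLLama-Urdu-Article-Generation",
    trust_remote_code=True,
    torch_dtype=torch.float16,  # roughly halves weight memory vs. float32
)
print(f"Approximate weight memory: {model.get_memory_footprint() / 1e6:.0f} MB")
```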

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model may exhibit the biases and limitations typical of large language models; they have not been specifically evaluated or mitigated here.

## How to Get Started with the Model

Use the code below to get started with the model.

```python3
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    "AhmadMustafa/MobiLLama-Urdu-Article-Generation", trust_remote_code=True
).to(device)
tokenizer = AutoTokenizer.from_pretrained("MBZUAI/MobiLlama-05B")

# Urdu prompt, roughly: "Write an article about the given news.
# News: Sushant Singh case: the Indian Supreme Court has sought detailed replies from the parties."
example = {'inputs': """ اس دی گی ایک خبر سے متعلق ایک مضمون لکھیں۔
خبر: سشانت سنگھ کیس بھارتی سپریم کورٹ نے فریقین سے مفصل جواب طلب کرلیا""",
}

# Wrap the prompt in the instruction format used during fine-tuning
prompt = f"### Instruction: {example['inputs']}\n ### Completion: "
inputs = tokenizer.encode(prompt, return_tensors="pt").to(device)

outputs = model.generate(inputs, max_new_tokens=512)

print(tokenizer.decode(outputs[0]))

# Output, beginning roughly: "Sure, here is an article about your news: ..."
# >>> جی ضرور، یہ رہا آپ کی خبر سے متعلق ایک مضمون:
# ممبئی بھارتی سپریم کورٹ نے بالی وڈ اداکار سشانت سنگھ راجپوت کیس کی سماعت کے دوران فریقین سے مفصل جواب طلب کرلیا بھارتی سپریم کورٹ کے جسٹس شوکت عزیز نے سشانت سنگھ راجپوت کیس کی سماعت کی سماعت کے دوران فریقین سے مفصل جواب طلب کیا سشانت سنگھ راجپوت کیس کی سماعت کے دوران فریقین سے مفصل جواب طلب کیا گیا جسٹس شوکت عزیز نے سشانت سنگھ راجپوت کیس کی سماعت کی سماعت کے دوران فریقین سے مفصل جواب طلب کیا سشانت سنگھ راجپوت کیس کی سماعت کے دوران فریقین سے مفصل جواب طلب کی
```


Please note that each fine-tuning example ends with an `<|EOS|>` token, so you can use it as a stop token to control generation.
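
As a hedged illustration (the `StopOnString` helper below is not part of the released code, just a sketch), one way to stop on the `<|EOS|>` string regardless of how the tokenizer splits it is a custom `StoppingCriteria`, continuing from the example above and reusing `model`, `tokenizer`, and `inputs`:

```python3
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnString(StoppingCriteria):
    """Stop generation once the newly generated text contains a stop string."""
    def __init__(self, tokenizer, stop_string, prompt_length):
        self.tokenizer = tokenizer
        self.stop_string = stop_string
        self.prompt_length = prompt_length  # number of prompt tokens to skip

    def __call__(self, input_ids, scores, **kwargs):
        generated = self.tokenizer.decode(input_ids[0][self.prompt_length:])
        return self.stop_string in generated

stopping = StoppingCriteriaList([StopOnString(tokenizer, "<|EOS|>", inputs.shape[1])])
outputs = model.generate(inputs, max_new_tokens=512, stopping_criteria=stopping)

generated_text = tokenizer.decode(outputs[0][inputs.shape[1]:])
print(generated_text.split("<|EOS|>")[0])  # drop the marker and anything after it
```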
## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@misc{thawakar2024mobillama,
      title={MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT},
      author={Omkar Thawakar and Ashmal Vayani and Salman Khan and Hisham Cholakkal and Rao Muhammad Anwer and Michael Felsberg and Timothy Baldwin and Eric P. Xing and Fahad Shahbaz Khan},
      year={2024},
      eprint={2402.16840},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Model Card Authors

- Name: Ahmad Mustafa Anis
- Email: ahmadanis5050@gmail.com