---
license: apache-2.0
datasets:
- NepaliAI/Nepali-Health-QA
pipeline_tag: text2text-generation
tags:
- medical
- health
---
# Model Card for NepaliAI/NFT-6.9k

## Model Details

### Model Description

NepaliAI/NFT-6.9k is a sequence-to-sequence model based on the BART (Bidirectional and Auto-Regressive Transformers) architecture. It was fine-tuned from the `facebook/bart-large-xsum` checkpoint on a Nepali health-related question-answering dataset.
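
For reference, the base checkpoint can be loaded directly from the Hub. The snippet below is only a minimal sketch of the starting point for fine-tuning; the actual training script is not included in this card.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

# Start from the facebook/bart-large-xsum checkpoint that NFT-6.9k was fine-tuned from.
base_model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-xsum")
base_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-xsum")
```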

### Intended Use

The model is designed to generate answers to health-related questions provided by users. The primary language for input and output is Nepali.

### Training Data

The model was fine-tuned on the NepaliAI/Nepali-Health-QA dataset, which consists of pairs of health-related questions and their corresponding answers in Nepali.
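
The sketch below shows one way the question-answer pairs could be prepared for sequence-to-sequence fine-tuning. The column names `Question` and `Answer` are assumptions, not confirmed by this card, so check the dataset's actual schema before running it.

```python
from datasets import load_dataset
from transformers import BartTokenizer

# Pull the QA pairs from the Hugging Face Hub.
dataset = load_dataset("NepaliAI/Nepali-Health-QA")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-xsum")

def preprocess(example):
    # Encode the question as the encoder input and the answer as the decoder target.
    # NOTE: "Question" / "Answer" column names are assumed; adjust to the dataset's schema.
    model_inputs = tokenizer(example["Question"], max_length=128, truncation=True)
    labels = tokenizer(text_target=example["Answer"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(preprocess, remove_columns=dataset["train"].column_names)
```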

### Training Procedure

The model was trained for 5 epochs with the following training parameters:

- Learning Rate: 5e-5
- Batch Size: 2
- Gradient Accumulation Steps: 4
- FP16 (mixed-precision training): Enabled
- Optimizer: AdamW with weight decay

The training loss consistently decreased, indicating successful learning.
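
The sketch below shows how these hyperparameters map onto a `Seq2SeqTrainingArguments` configuration (the `Trainer` uses AdamW by default). It assumes the `tokenized` dataset and `tokenizer` from the preprocessing sketch above; the output directory, weight-decay value, and logging interval are illustrative assumptions, since the card only lists the parameters above.

```python
from transformers import (
    BartForConditionalGeneration,
    BartTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-xsum")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-xsum")

training_args = Seq2SeqTrainingArguments(
    output_dir="nft-6.9k",              # hypothetical output directory
    num_train_epochs=5,
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    fp16=True,                          # mixed-precision training; requires a GPU
    weight_decay=0.01,                  # assumed value; the card only states "AdamW with weight decay"
    logging_steps=50,                   # assumed logging interval
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],   # tokenized dataset from the preprocessing sketch above
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```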

## Usage
```python
from transformers import BartTokenizer, BartForConditionalGeneration

# Load the fine-tuned model and its tokenizer
model = BartForConditionalGeneration.from_pretrained("NepaliAI/NFT-6.9k")
tokenizer = BartTokenizer.from_pretrained("NepaliAI/NFT-6.9k")

# Example health-related question in Nepali
# ("Can I take the medicine my doctor prescribed for strep throat during my period?")
input_text = "के म मेरो महिनावारीको समयमा स्ट्रेप थ्रोटको लागि डाक्टरले तोकेको औषधि लिन सक्छु?"

# Tokenize the input question
inputs = tokenizer(input_text, return_tensors="pt", max_length=128, truncation=True)

# Generate answer token IDs with sampling
output_ids = model.generate(
    **inputs,
    max_length=256,
    do_sample=True,
    top_p=0.95,
    top_k=50,
    temperature=0.7,
    num_return_sequences=1,
    no_repeat_ngram_size=2,
)

# Decode the generated token IDs into text
generated_response = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0]

print("Generated Response:", generated_response)
```
## Evaluation

### Metrics

No formal evaluation has been performed yet.

### Limitations

- The model's knowledge is limited to the training data, and it might not generalize well to unseen health-related questions.
- The model's responses might vary based on the complexity and diversity of input questions.

## Ethical Considerations

### Intended Use

The model is intended for informational purposes related to health. Users should be aware that its responses are generated from patterns learned during training and are not a substitute for professional medical advice.

### Bias

Care should be taken to minimize biases present in the training data. Diverse and representative datasets are crucial to mitigate biases in the model's responses.

### Privacy

The model does not store or retain user input. However, users are advised not to input sensitive or personally identifiable information.

## Future Directions

- Continuous improvement through hyperparameter tuning and model architecture exploration.
- Expansion of the training dataset to enhance the model's knowledge and performance.
- Adaptation of the model to other Nepali-language domains beyond the current health-related focus.

## License

This model is released under the Apache 2.0 license (see the metadata above). Usage should also respect the Hugging Face Model Hub's community guidelines and the license of the `facebook/bart-large-xsum` base model.

---