sepiatone committed on
Commit 6597aad
1 Parent(s): 0d30a4d

update readme

Files changed (1)
  1. README.md +13 -14
README.md CHANGED
@@ -9,30 +9,32 @@ tags:
  model-index:
  - name: phi-3-mini-sft-indicqa-hindi-v0.1
    results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # phi-3-mini-sft-indicqa-hindi-v0.1

- This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on an unknown dataset.

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 0.0002
@@ -45,11 +47,8 @@ The following hyperparameters were used during training:
  - num_epochs: 1
  - mixed_precision_training: Native AMP

- ### Training results

-
-
- ### Framework versions

  - PEFT 0.13.2
  - Transformers 4.44.2
 
  model-index:
  - name: phi-3-mini-sft-indicqa-hindi-v0.1
    results: []
+ datasets:
+ - sepiatone/ai4bharat-IndicQA-hi-202410
+ - ai4bharat/IndicQA
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

+ # **phi-3-mini-sft-indicqa-hindi-v0.1**

+ ### model description

+ this model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the dataset [ai4bharat-IndicQA-hi-202410](https://huggingface.co/datasets/sepiatone/ai4bharat-IndicQA-hi-202410).

+ prepared by [@sepiatone](https://github.com/sepiatone).

+ ### intended uses & limitations

+ intended for educational and non-commercial purposes.

+ ### training procedure

+ #### training hyperparameters

  The following hyperparameters were used during training:
  - learning_rate: 0.0002
@@ -45,11 +47,8 @@ The following hyperparameters were used during training:
  - num_epochs: 1
  - mixed_precision_training: Native AMP

+ #### library versions

  - PEFT 0.13.2
  - Transformers 4.44.2
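
For reference, a minimal inference sketch against the stack listed above (Transformers 4.44.2, PEFT 0.13.2): load the base model, attach the fine-tuned adapter, and ask a Hindi question. The adapter repo id `sepiatone/phi-3-mini-sft-indicqa-hindi-v0.1` and the sample question are assumptions for illustration, not something stated in the card.

```python
# minimal inference sketch -- assumes the adapter is published as
# "sepiatone/phi-3-mini-sft-indicqa-hindi-v0.1"; adjust the id if it lives elsewhere.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-mini-128k-instruct"
adapter_id = "sepiatone/phi-3-mini-sft-indicqa-hindi-v0.1"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # as on the base model card
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter

# Phi-3-instruct expects chat-formatted prompts; an illustrative Hindi question:
messages = [{"role": "user", "content": "भारत का राष्ट्रीय पक्षी कौन सा है?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Since only the adapter weights were trained, the base model downloads separately; `PeftModel.merge_and_unload()` can bake the adapter into the base weights if a single standalone checkpoint is preferred.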