avinashhm committed · Commit 73e0f08 · verified · 1 Parent(s): 7cdcb9e

Update README.md

Files changed (1):
  1. README.md +14 -8
README.md CHANGED
@@ -1,13 +1,20 @@
  ---
  library_name: transformers
  tags:
- - dolly-v2
- - instruction-tuning
- - peft
- - lora
+ - dolly-v2
+ - instruction-tuning
+ - peft
+ - lora
+ license: apache-2.0
+ datasets:
+ - MBZUAI/LaMini-instruction
+ language:
+ - en
+ base_model:
+ - databricks/dolly-v2-3b
  ---

- # Model Card for dolly-3b-lora
+ # dolly-3b-lora(Finetuned)

  This model is a fine-tuned version of the Dolly V2 3B language model, enhanced with Parameter-Efficient Fine-Tuning (PEFT) using Low-Rank Adaptation (LoRA). It was fine-tuned on the LaMini-instruction dataset to improve its ability to follow instructions and generate coherent responses for various tasks.

@@ -18,12 +25,11 @@ This model is a fine-tuned version of the Dolly V2 3B language model, enhanced w
  This is a fine-tuned version of the `databricks/dolly-v2-3b` model, adapted using LoRA on the LaMini-instruction dataset. The model is designed for instruction-following tasks, leveraging the efficiency of LoRA to fine-tune approximately 2.93% of the total parameters while maintaining performance. It supports text generation tasks and has been optimized for use on GPU hardware with 8-bit quantization, with a fallback to CPU if needed.

  - **Developed by:** avinashhm
- - **Funded by [optional]:** Not specified
- - **Shared by [optional]:** avinashhm
+ - **Shared by :** avinashhm
  - **Model type:** Causal Language Model
  - **Language(s) (NLP):** English
  - **License:** Apache-2.0
- - **Finetuned from model [optional]:** databricks/dolly-v2-3b
+ - **Finetuned from model :** databricks/dolly-v2-3b

  ### Model Sources
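The card's "approximately 2.93% of the total parameters" figure is the usual LoRA ratio: adapter parameters divided by total parameters. As a minimal sketch of how such a number arises (the matrix shape and rank below are hypothetical illustrations, not this model's actual LoRA configuration):

```python
# Illustrative only: shapes and rank are hypothetical, not dolly-3b-lora's config.
def lora_params(d_in: int, d_out: int, r: int) -> int:
    """LoRA adds two low-rank factors per adapted weight:
    A with shape (r, d_in) and B with shape (d_out, r)."""
    return r * d_in + d_out * r

base = 10_000 * 10_000                    # one hypothetical dense weight matrix
adapter = lora_params(10_000, 10_000, 8)  # rank-8 adapter: 160,000 params
fraction = adapter / (base + adapter)
print(f"trainable fraction: {fraction:.4%}")  # a small fraction of one percent
```

Summing this count over every adapted weight matrix in the network, divided by the full parameter count, yields the trainable-parameter percentage reported in model cards like this one.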