01GangaPutraBheeshma committed on
Commit
d725995
1 Parent(s): 1ac10c4

Update README.md

Files changed (1)
  1. README.md +11 -18
README.md CHANGED
@@ -11,33 +11,26 @@ tags:
  ---
  # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->
-
- This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

  ## Model Details

  ### Model Description

- <!-- Provide a longer summary of what this model is. -->
-

-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

  ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses
 
 
  ---
  # Model Card for Model ID

+ 01GangaPutraBheeshma/facebook_opt2 is an open-source language model fine-tuned from facebook/opt-350m. It was retrained with Supervised Finetuning (SFT), a strategy inspired by offline transfer reinforcement learning, so this version of the model learns from mixed-quality data without preference labels while delivering strong performance. Despite the simple approach, my commitment is to develop a high-performance, commercially viable, open-source large language model, and I continue to make significant strides toward this vision.
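+ 
+ As a quick start, the snippet below is a minimal sketch of loading the checkpoint with the Hugging Face transformers library; the instruction/response prompt template is an assumption on my part, not a format confirmed elsewhere in this card.
+ 
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ 
+ model_id = "01GangaPutraBheeshma/facebook_opt2"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+ 
+ # NOTE: the "### Instruction / ### Response" template is an assumption;
+ # adjust it to whatever format was actually used during fine-tuning.
+ prompt = "### Instruction:\nExplain what supervised fine-tuning is.\n\n### Response:\n"
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```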
 
 
  ## Model Details

  ### Model Description

+ This model was trained on databricks/databricks-dolly-15k. Each entry in that dataset contains a category, an instruction, a context, and a response to that instruction. The project's objective is to improve the quality of instructions, inputs, and responses so that they align with their designated task category; all text should be articulate and factually accurate, and responses should be complete yet concise.
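+ 
+ A small sketch of how one dolly record can be flattened into a single SFT prompt is shown below; the field names come from the dataset itself, while the "###"-style template is an illustrative assumption rather than the card's confirmed format.
+ 
+ ```python
+ from datasets import load_dataset
+ 
+ dataset = load_dataset("databricks/databricks-dolly-15k", split="train")
+ 
+ def format_example(example):
+     # Each record carries: category, instruction, context, response.
+     context = f"\n\n### Context:\n{example['context']}" if example["context"] else ""
+     return (
+         f"### Instruction:\n{example['instruction']}{context}"
+         f"\n\n### Response:\n{example['response']}"
+     )
+ 
+ print(dataset[0]["category"])      # one of the dataset's task categories
+ print(format_example(dataset[0]))
+ ```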
 
+ - **Developed by:** Uttasarga Singh
+ - **Funded by [optional]:** Self
+ - **Shared by [optional]:** Self
+ - **Model type:** Decoder-based (causal) language model
+ - **Language(s) (NLP):** English
+ - **License:** Meta (inherited from facebook/opt-350m)
+ - **Finetuned from model [optional]:** facebook/opt-350m (see the training sketch below)
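+ 
+ The card states that Supervised Finetuning was used on top of facebook/opt-350m; the following is a hedged sketch of how such a run might look with trl's SFTTrainer (trl >= 0.9 API assumed). The output path, prompt template, and hyperparameters are illustrative assumptions, not the exact recipe behind facebook_opt2.
+ 
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM
+ from trl import SFTConfig, SFTTrainer
+ 
+ model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
+ 
+ # Flatten each dolly record into a single "text" field for the trainer.
+ dataset = load_dataset("databricks/databricks-dolly-15k", split="train")
+ dataset = dataset.map(
+     lambda ex: {
+         "text": f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['response']}"
+     }
+ )
+ 
+ trainer = SFTTrainer(
+     model=model,
+     train_dataset=dataset,
+     args=SFTConfig(
+         output_dir="facebook_opt2-sft",    # hypothetical output path
+         dataset_text_field="text",
+         per_device_train_batch_size=4,     # illustrative choice
+         num_train_epochs=1,                # illustrative choice
+     ),
+ )
+ trainer.train()
+ ```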
 
  ### Model Sources [optional]

+ - **Repository:** https://github.com/uttasarga9067/dataset-dolly-to-the-rescue
+ - **Paper [optional]:** In Development
 
 
 
  ## Uses