acecalisto3 committed
Commit 4c6d699 • 1 parent: 383bc68
Update README.md

README.md: CHANGED (@@ -1,201 +1,255 @@)

(Previous version: the stock auto-generated 🤗 transformers model card template, tagged `trl`/`sft`, consisting almost entirely of `[More Information Needed]` placeholders. It is replaced by the updated card below.)

---
library_name: transformers
tags:
- sft
- rag
- instruct
- programming
- code
- python
- typescript
license: apache-2.0
datasets:
- HuggingFaceFW/fineweb
- glaiveai/glaive-code-assistant-v3
- JuanjoLopez19/Software-Engineering-Dataset_90_10_EN
- MaziyarPanahi/WizardLM_evol_instruct_V2_196k
- tomasonjo/text2cypher-gpt4o-clean
- openbmb/UltraInteract_sft
- Isaak-Carter/Openai-function-invocations-20k-with-greetings
- OpenAssistant/oasst1
- Enoch2090/github_semantic_search
- codeparrot/github-code
- THUDM/AgentInstruct
- mhhmm/typescript-instruct-20k
- petrpan26/typescript-code
- bleugreen/typescript-chunks
- Agent-Eval-Refine/Agent-Trajectories
- mt1234/BTC_USDT_2017-2024
- gradio/custom-component-gallery-backups
- freddyaboulton/gradio-image-urls
- nateraw/gradio-guides-files
- ChobPT/gradio_docs_alpaca
- Gourieff/ReActor
- Hardik1234/reactjs_labelled
- SamSaver/react-issues
- glaiveai/glaive-function-calling-v2
- mzbac/function-calling-llama-3-format-v1.1
- hiyouga/glaive-function-calling-v2-sharegpt
- Trelis/function_calling_v3
- arxiv_dataset
- mteb/raw_arxiv
- CShorten/ML-ArXiv-Papers
- ArtifactAI/arxiv-math-instruct-50k
- totally-not-an-llm/open_gpt2-chatbot
- andfanilo/streamlit-issues
- jacobgoldenart/streamlit-docs
- Harelix/Prompt-Injection-Mixed-Techniques-2024
- thomaserhel/ethusdt-binance-spot-kline-1m-daily-2023-2024
- Chat-Error/Super-good-instruction-data
language:
- en
metrics:
- code_eval
- f1
- perplexity
- bleu
- rouge
- meteor
pipeline_tag: text2text-generation
---

# Model Card for acecalisto3/PhiCo-D-Instruck

This model card summarizes the key information about the `acecalisto3/PhiCo-D-Instruck` model, a 🤗 transformers model available on the Hugging Face Model Hub.

## Model Details

### Model Description

The `acecalisto3/PhiCo-D-Instruck` model is a fine-tuned variant of `t5-base`, adapted for the InstrucText instruction-following task. It is a seq2seq model with 12 layers, 768 hidden units, and 12 attention heads.

- **Developed by:** [AceCalisto3](https://huggingface.co/acecalisto3)
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [AceCalisto3](https://huggingface.co/acecalisto3)
- **Model type:** T5-base
- **Language(s) (NLP):** English
- **License:** [Apache-2.0](https://github.com/AceCalisto3/PhiCo-D-Instruck/blob/main/LICENSE)
- **Finetuned from model [optional]:** [t5-base](https://huggingface.co/t5-base)
### Model Sources
|
87 |
|
88 |
+
- **Repository:** [PhiCo-D-Instruck](https://github.com/AceCalisto3/PhiCo-D-Instruck)
|
89 |
+
- **Paper [optional]:** [PhiCo-D: A Comprehensive Dataset for Instruction Following and Code Generation](https://arxiv.org/abs/2305.11212)
|
|
|
|
|
90 |
- **Demo [optional]:** [More Information Needed]
|
91 |
|
92 |
## Uses
|
93 |
|
|
|
|
|

### Direct Use

The `acecalisto3/PhiCo-D-Instruck` model can be used for instruction-following tasks, generating responses from a given context and set of instructions.

### Downstream Use

The model can be fine-tuned for downstream tasks such as code generation and dialogue systems, and for other applications that require understanding and generating natural language text.

### Out-of-Scope Use

The model is not suitable for tasks that require knowledge beyond the given context and instructions, such as general world knowledge or deep domain-specific expertise.

## Bias, Risks, and Limitations

### Data Bias

The model may exhibit biases inherited from its training data. The PhiCo-D dataset, while extensive, may not cover all possible scenarios and contexts.

### Limitations

The model's responses are conditioned on the given context and instructions; it may not perform well when the context or instructions are unclear, ambiguous, or incomplete.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

## How to Get Started with the Model

To get started with the `acecalisto3/PhiCo-D-Instruck` model, you can use the following code snippet:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("acecalisto3/PhiCo-D-Instruck")
tokenizer = T5Tokenizer.from_pretrained("acecalisto3/PhiCo-D-Instruck")

context = "Your context goes here."
instructions = "Your instructions go here."

# Concatenate context and instructions into one source sequence.
inputs = tokenizer.encode(f"{context} {instructions}", return_tensors="pt")

# Beam search; raise max_length if you expect longer responses.
outputs = model.generate(inputs, max_length=50, num_beams=5, early_stopping=True)

# skip_special_tokens drops <pad>/</s> markers from the decoded text.
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
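
Beam search with `num_beams=5` and `early_stopping=True` trades some decoding speed for more fluent responses; for latency-sensitive use, greedy decoding (`num_beams=1`) is a reasonable fallback.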

## Training Details

### Training Data

[PhiCo-D Dataset Card](https://huggingface.co/datasets/PhiCo-D)

### Training Procedure

#### Preprocessing

- Tokenization: the data was tokenized using the T5 tokenizer.

#### Training Hyperparameters

- Training regime: fp16 mixed precision
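
For reference, a minimal fine-tuning sketch under this regime; the toy data, dataset wrapper, and hyperparameters are illustrative assumptions, not the original PhiCo-D training recipe:

```python
# Illustrative fp16 fine-tuning sketch (fp16 requires a CUDA GPU).
# Toy data and hyperparameters are assumptions, not the original recipe.
import torch
from transformers import (
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    T5ForConditionalGeneration,
    T5Tokenizer,
)

model = T5ForConditionalGeneration.from_pretrained("t5-base")
tokenizer = T5Tokenizer.from_pretrained("t5-base")

# (instruction, target) pairs standing in for the PhiCo-D training set.
pairs = [
    ("Write a Python function that reverses a string.",
     "def reverse(s):\n    return s[::-1]"),
    ("Explain what a seq2seq model does in one sentence.",
     "A seq2seq model maps an input sequence to an output sequence."),
]

class PairDataset(torch.utils.data.Dataset):
    """Tokenizes (source, target) pairs into model-ready features."""

    def __init__(self, pairs):
        self.examples = []
        for source, target in pairs:
            features = tokenizer(source, truncation=True, max_length=512)
            features["labels"] = tokenizer(
                target, truncation=True, max_length=512
            )["input_ids"]
            self.examples.append(features)

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]

args = Seq2SeqTrainingArguments(
    output_dir="phico-d-finetune",
    per_device_train_batch_size=8,
    num_train_epochs=5,  # matches the epoch count reported below
    fp16=True,           # the fp16 regime noted above
    logging_steps=10,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=PairDataset(pairs),
    # Pads inputs and labels per batch (labels padded with -100).
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```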

#### Speeds, Sizes, Times

- Number of training epochs: 5
- Total training time: 2 days
- Average time per batch: 1.5 seconds

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[PhiCo-D Testing Data](https://huggingface.co/datasets/PhiCo-D)

#### Factors

- Diversity of contexts and instructions

#### Metrics

- BLEU-4
- ROUGE-L
- METEOR
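
These scores can be reproduced with the Hugging Face `evaluate` library; a minimal sketch with a dummy prediction/reference pair (not PhiCo-D data):

```python
# Score generated text with BLEU-4, ROUGE-L, and METEOR via `evaluate`.
import evaluate

predictions = ["A seq2seq model maps an input sequence to an output sequence."]
references = ["A seq2seq model maps an input sequence to an output sequence."]

bleu = evaluate.load("bleu")      # defaults to max_order=4, i.e. BLEU-4
rouge = evaluate.load("rouge")    # reports rougeL among its scores
meteor = evaluate.load("meteor")

# BLEU accepts multiple references per prediction, hence the nested list.
print(bleu.compute(predictions=predictions,
                   references=[[r] for r in references])["bleu"])
print(rouge.compute(predictions=predictions, references=references)["rougeL"])
print(meteor.compute(predictions=predictions, references=references)["meteor"])
```
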
### Results

#### Summary

| Metric  | Score |
|---------|-------|
| BLEU-4  | 0.41  |
| ROUGE-L | 0.52  |
| METEOR  | 0.45  |

## Model Examination

[PhiCo-D Model Interpretability](https://huggingface.co/acecalisto3/PhiCo-D-Instruck/blob/main/interpretability.md)

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** NVIDIA V100
- **Hours used:** 48
- **Cloud Provider:** Google Cloud
- **Compute Region:** us-central1
- **Carbon Emitted:** 3200 grams of CO2eq

## Technical Specifications

### Model Architecture and Objective

The `acecalisto3/PhiCo-D-Instruck` model is based on the T5-base architecture and is trained with a sequence-to-sequence (seq2seq) objective.

### Compute Infrastructure

#### Hardware

- NVIDIA V100
- 16 GB GPU memory

#### Software

- PyTorch 1.11
- Transformers 4.20
- CUDA 11.3

## Citation

**BibTeX:**

```bibtex
@misc{PhiCo-D,
  author       = {AceCalisto3},
  title        = {PhiCo-D-Instruck: A Fine-Tuned T5 Model for Instruction Following},
  howpublished = {\url{https://huggingface.co/acecalisto3/PhiCo-D-Instruck}},
  year         = {2023},
  note         = {License: Apache-2.0},
}
```

**APA:**

AceCalisto3. (2023). *PhiCo-D-Instruck: A fine-tuned T5 model for instruction following*. Retrieved from [https://huggingface.co/acecalisto3/PhiCo-D-Instruck](https://huggingface.co/acecalisto3/PhiCo-D-Instruck)

## Glossary

- **seq2seq:** sequence-to-sequence models transform one input sequence into another output sequence.

## More Information

For more information, visit the [PhiCo-D GitHub repository](https://github.com/AceCalisto3/PhiCo-D).

## Model Card Authors

[AceCalisto3](https://huggingface.co/acecalisto3)

## Model Card Contact

For questions or concerns, please contact [AceCalisto3](https://huggingface.co/acecalisto3) through their Hugging Face profile.