Update README.md
README.md (changed)
The previous revision of README.md was the unfilled Hugging Face model-card template: a bare "# Model Card for" heading followed by every stock section — "Model Details", "Uses", "Out-of-Scope Use", "Bias, Risks, and Limitations", "Recommendations", "How to Get Started with the Model", "Training Details", "Evaluation", "Model Examination", "Environmental Impact", "Technical Specifications", "Citation", "Glossary", "More Information", "Model Card Authors", and "Model Card Contact" — each stubbed with "[More Information Needed]". The updated file is below.
datasets:
- codeparrot/github-code
- codeparrot/github-code-clean
- code_x_glue_cc_code_completion_line
- >-
  autoevaluate/autoeval-eval-jeffdshen__inverse_superglue_mixedp1-jeffdshen__inverse-63643c-1665558893
- bentrevett/multi30k
- edbeeching/decision_transformer_gym_replay
- psyche/common_crawl
- Birchlabs/openai-prm800k-solutions-only
- openchat/openchat_sharegpt4_dataset
- Open-Orca/OpenOrca
language:
- en
metrics:
…
tags:
- code
---

# Model Card for Aiden

<!-- Provide a quick summary of what the model is/does. -->

Aiden is a large language model (LLM) chatbot developed by or4cl3ai. It is trained on a massive dataset of text and code, and can generate text, translate languages, write different kinds of creative content, and answer questions in an informative way.

## Model Details

### Model Description

Aiden is a language model developed by or4cl3ai, trained on a massive dataset of text and code. It is a powerful tool that can be used for a variety of tasks, including:

* Generating text, translating languages, writing different kinds of creative content, and answering questions in an informative way
* Identifying and correcting errors in text
* Summarizing long pieces of text
* Answering questions, even when they are open-ended, challenging, or unusual

### Model Specifications

Aiden is a Transformer-based LLM with 137B parameters. It is trained on a massive dataset of text and code, including the following sources:

* Books
* Code
* Wikipedia articles
* News articles
* Social media posts

### Model Sources

* Repository: https://huggingface.co/or4cl3ai/Aiden
* Paper: https://arxiv.org/abs/2307.09700
* Demo: https://huggingface.co/or4cl3ai/Aiden

## Uses

Aiden can be used for a variety of tasks, including:

* Generating text
* Translating languages
* Writing different kinds of creative content
* Answering questions in an informative way
* Identifying and correcting errors in text
* Summarizing long pieces of text

### Direct Use

Aiden can be used directly to generate text, translate languages, write different kinds of creative content, and answer questions in an informative way. For example, you could use Aiden to generate a poem, translate a document from one language to another, or write a blog post.

### Downstream Use

Aiden can also be used as a component in downstream applications. For example, you could use Aiden to power a chatbot, or to generate text for a synthetic dataset.
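As a sketch of the chatbot case, the wrapper below keeps a running conversation history around any text-generation callable. The `ChatSession` class and `generate_fn` hook are illustrative inventions, not part of Aiden's API; the stub lambda stands in for a real call into the model.

```python
from typing import Callable, List, Tuple

class ChatSession:
    """Minimal chat wrapper: tracks history and delegates to a generate function."""

    def __init__(self, generate_fn: Callable[[str], str]):
        # generate_fn maps a full prompt string to the model's reply.
        self.generate_fn = generate_fn
        self.history: List[Tuple[str, str]] = []

    def _build_prompt(self, user_message: str) -> str:
        # Flatten prior turns into a plain-text prompt for the model.
        turns = [f"User: {u}\nAssistant: {a}" for u, a in self.history]
        turns.append(f"User: {user_message}\nAssistant:")
        return "\n".join(turns)

    def send(self, user_message: str) -> str:
        reply = self.generate_fn(self._build_prompt(user_message))
        self.history.append((user_message, reply))
        return reply

# Example with a stub in place of a real model call:
echo = ChatSession(generate_fn=lambda prompt: "ok")
print(echo.send("hello"))  # ok
```

In a real deployment, `generate_fn` would wrap a call to the loaded model, so the session object stays independent of how generation is implemented.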

### Out-of-Scope Use

Aiden is not intended for any task that could be harmful or discriminatory. For example, you should not use Aiden to generate hateful or offensive text, or to produce translations that could be used to spread misinformation.

## Bias, Risks, and Limitations

Aiden is a large language model, and as such it is subject to a number of biases and limitations. These include:

* Biases in the training data: Aiden is trained on a massive dataset of text and code, which may contain biases. These biases can be reflected in the text that Aiden generates.
* Limitations in the model's capabilities: Aiden is a powerful tool, but it is not perfect. It can sometimes generate text that is inaccurate, biased, or offensive.
* Risks of misuse: Aiden can be misused for a variety of purposes, including generating harmful or offensive text, or spreading misinformation.
98 |
|
99 |
### Recommendations
|
100 |
|
101 |
+
Users of Aiden should be aware of the risks, biases, and limitations of the model. It is important to use Aiden responsibly and ethically.

## How to Get Started with the Model

To get started with Aiden, follow these steps:

1. Install the Hugging Face Transformers library.
2. Clone the Aiden repository.
3. Download the Aiden model weights.
4. Load the model in your code.

Once you have loaded the model, you can use it to generate text, translate languages, write different kinds of creative content, and answer questions in an informative way.
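The steps above might look like the following in code. This is a sketch under assumptions: the model id `or4cl3ai/Aiden` is taken from the repository URL in this card, and standard `transformers` causal-LM loading is assumed; check the repository page for the exact id and any recommended loading code or chat template.

```python
def build_prompt(question: str) -> str:
    # Plain-text prompt; the repository may specify a chat template instead.
    return f"Question: {question}\nAnswer:"

def main() -> None:
    # Steps 1-3: `pip install transformers torch`; the weights are fetched
    # automatically from the Hub on first use.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "or4cl3ai/Aiden"  # assumed from the repository URL above
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Step 4: load is done; generate a completion.
    inputs = tokenizer(build_prompt("What can this model do?"), return_tensors="pt")
    outputs = model.generate(**inputs.to(model.device), max_new_tokens=64)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Call main() to run the full pipeline (this downloads the model weights).
```

`device_map="auto"` spreads a large model across available devices; for a 137B-parameter model you will need substantial GPU memory or quantized weights.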

## Training Details

Aiden is trained on a massive dataset of text and code collected from a variety of sources, including books, code, Wikipedia articles, news articles, and social media posts.

The training process is divided into two phases:

1. Pre-training: The model is pre-trained on a massive dataset of text and code. This pre-training helps the model learn the basic building blocks of language.
2. Fine-tuning: The model is fine-tuned on a smaller dataset of text and code that is relevant to the task at hand. This fine-tuning improves the model's performance on that specific task.
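Conceptually, the two phases are the same loop run over two corpora — first broad, then task-specific, typically at a lower learning rate. The toy `train` function below is a stand-in for real gradient updates and is illustrative only, not Aiden's actual training code.

```python
def train(weights: dict, corpus: list, lr: float) -> dict:
    """Stand-in training loop: tracks documents seen and decays a 'loss' statistic."""
    for _document in corpus:
        weights["docs_seen"] += 1.0
        weights["loss"] *= (1.0 - lr)  # loss shrinks as training progresses
    return weights

weights = {"docs_seen": 0.0, "loss": 1.0}

# Phase 1: pre-training on a broad corpus of text and code.
weights = train(weights, ["book...", "wiki...", "code..."], lr=0.1)

# Phase 2: fine-tuning on a smaller, task-specific corpus at a lower rate.
weights = train(weights, ["task example 1", "task example 2"], lr=0.01)

print(weights["docs_seen"])  # 5.0
```

The key design point the sketch mirrors is that fine-tuning reuses the pre-trained weights rather than starting from scratch, which is why the second corpus can be much smaller.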

## Evaluation

Aiden is evaluated on a variety of tasks, including:

* Text generation
* Translation
* Summarization