## Start with

* Model description

-> The model description provides basic details about the model. This includes the architecture, version, if it was introduced in a paper, if an original implementation is available, the author, and general information about the model. Any copyright should be attributed here. General information about training procedures, parameters, and important disclaimers can also be mentioned in this section.

* Intended uses & limitations

-> Here you describe the use cases the model is intended for, including the languages, fields, and domains where it can be applied. This section of the model card can also document areas that are known to be out of scope for the model, or where it is likely to perform suboptimally.

* How to use

-> This section should include some examples of how to use the model. This can showcase usage of the pipeline() function, usage of the model and tokenizer classes, and any other code you think might be helpful.

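A "How to use" section along these lines might contain a minimal sketch like the following. The checkpoint name is an illustrative public sentiment model chosen for the example, not one tied to any particular repository:

```python
from transformers import pipeline

# Build a sentiment-analysis pipeline; the checkpoint below is an
# illustrative public model used only for demonstration.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Run inference on a single example; the result is a list with one
# dict holding a "label" and a "score".
result = classifier("I really enjoyed this movie!")
print(result)
```

The same section can also demonstrate loading the model and tokenizer classes directly (e.g. via `AutoTokenizer.from_pretrained` and the matching `AutoModel` class) for users who need lower-level control.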
* Training data

-> This part should indicate which dataset(s) the model was trained on. A brief description of the dataset(s) is also welcome.

* Training procedure

-> In this section you should describe all the relevant aspects of training that are useful from a reproducibility perspective. This includes any preprocessing and postprocessing that were done on the data, as well as details such as the number of epochs the model was trained for, the batch size, the learning rate, and so on.

* Variables and metrics

-> Here you should describe the metrics you use for evaluation, and the different factors you are measuring. Mentioning which metric(s) were used, on which dataset and which dataset split, makes it easy to compare your model's performance to that of other models. These should be informed by the previous sections, such as the intended users and use cases.

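For reproducibility, it often helps to record the training hyperparameters in one structured place in the card. A minimal sketch, where every value is an illustrative placeholder rather than a recommendation:

```python
# Illustrative training configuration; all values are placeholders.
training_config = {
    "preprocessing": "lowercase + whitespace tokenization",
    "num_epochs": 3,
    "batch_size": 32,
    "learning_rate": 2e-5,
    "optimizer": "AdamW",
}

# Derived quantities like the total number of optimizer steps can be
# reported too; here we assume a hypothetical dataset of 10,000 examples.
dataset_size = 10_000
steps_per_epoch = dataset_size // training_config["batch_size"]
total_steps = steps_per_epoch * training_config["num_epochs"]
print(total_steps)  # 936
```

Listing the configuration this way lets readers reproduce a run without hunting through prose for individual numbers.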
* Evaluation results

-> Finally, provide an indication of how well the model performs on the evaluation dataset. If the model uses a decision threshold, either provide the decision threshold used in the evaluation, or provide details on evaluation at different thresholds for the intended uses.
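The decision-threshold point above can be made concrete with a small sketch: evaluating a binary classifier's accuracy at several candidate thresholds. The scores and labels below are made-up illustrative data, not results from any real model:

```python
# Made-up prediction scores and true labels, for illustration only.
scores = [0.92, 0.41, 0.78, 0.13, 0.66, 0.30]
labels = [1, 0, 1, 0, 1, 0]

def accuracy_at_threshold(scores, labels, threshold):
    """Accuracy when predicting class 1 for scores >= threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

# Report accuracy at a few candidate thresholds, as the card suggests.
for t in (0.3, 0.5, 0.7):
    print(f"threshold={t}: accuracy={accuracy_at_threshold(scores, labels, t):.2f}")
```

Reporting results at several thresholds like this lets readers pick the operating point that matches their own use case, rather than inheriting one implicit default.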