Update README.md
README.md

---
library_name: transformers
license: apache-2.0
language:
- am
- ti
---

# Model Card for Model ID

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [Your Name or Organization]
- **Funded by:** [Optional: Funding Information]
- **Shared by:** [Optional: Sharing Information]
- **Model type:** XLM-RoBERTa for Sequence Classification
- **Language(s) (NLP):** Amharic (am), Tigrinya (ti)
- **License:** Apache 2.0
- **Finetuned from model:** xlm-roberta-base

### Model Sources [optional]

- **Repository:** [available soon]
- **Paper:** [available soon]
- **Demo:** [available soon]

## Uses

### Direct Use

This model can be used for sequence classification tasks, such as sentiment analysis or text classification.

### Downstream Use [optional]

The model can be fine-tuned further for specific classification tasks or domains.

### Out-of-Scope Use

Do not use this model, without further validation, for tasks where the handling of biased or sensitive language is critical.

## Bias, Risks, and Limitations

The model may exhibit biases present in the training data. Users should evaluate its performance carefully in their specific application to avoid reinforcing unwanted biases.

### Recommendations

Users should assess the model's performance in their specific use case, paying particular attention to potential biases and limitations. Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the tokenizer and model below to load the model for sequence classification; the same loading step is the starting point for fine-tuning on your own dataset.

```python
from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification

model_name = "Hailay/FT_EXLMR"
tokenizer = XLMRobertaTokenizer.from_pretrained(model_name)
model = XLMRobertaForSequenceClassification.from_pretrained(model_name)

# Example usage: tokenize a text and read off the predicted class
inputs = tokenizer("Your text here", return_tensors="pt")
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1).item()
```
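
If the checkpoint's config defines an `id2label` mapping, the predicted index can be turned into a human-readable label with `model.config.id2label[predicted_class]`.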

## Training Details

### Training Data

First, the original tokenizer was extended to better cover low-resource languages; the model was then fine-tuned on a custom dataset of texts and labels in CSV format. The data consists of sentences labeled for binary classification.

### Training Procedure

#### Preprocessing

The dataset was tokenized with the XLM-RoBERTa tokenizer; texts were padded and truncated to a fixed length of 128 tokens.
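
A minimal sketch of that preprocessing step, assuming the tokenizer from this repository (the example sentences are illustrative):

```python
from transformers import XLMRobertaTokenizer

tokenizer = XLMRobertaTokenizer.from_pretrained("Hailay/FT_EXLMR")

# Pad and truncate every text to the fixed length of 128 tokens described above.
encoded = tokenizer(
    ["First example sentence.", "Second example sentence."],
    padding="max_length",
    truncation=True,
    max_length=128,
    return_tensors="pt",
)
print(encoded["input_ids"].shape)  # torch.Size([2, 128])
```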

#### Training Hyperparameters

- **Training regime:** Fine-tuned for 3 epochs with a learning rate of 1e-5.
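
A hedged sketch of a 🤗 Trainer setup matching that regime; the output directory, batch size, and dataset variables are illustrative assumptions, not values from this card:

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="./ft_exlmr",         # hypothetical output path
    num_train_epochs=3,              # from the card
    learning_rate=1e-5,              # from the card
    per_device_train_batch_size=16,  # assumption; the batch size is not stated
)

trainer = Trainer(
    model=model,                     # loaded as in the getting-started snippet
    args=training_args,
    train_dataset=train_dataset,     # your tokenized training split (assumed prepared)
    eval_dataset=eval_dataset,       # your tokenized evaluation split
)
trainer.train()
```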

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

#### Testing Data

The model was evaluated on a separate test dataset using the same preprocessing as the training data.

[More Information Needed]

#### Factors

Factors such as text length and class imbalance were considered during evaluation.

[More Information Needed]

#### Metrics

Metrics include accuracy and loss during training and evaluation.
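
One possible way to report accuracy alongside the loss is a `compute_metrics` hook for the Trainer; this sketch is illustrative and not taken from this card:

```python
import numpy as np

def compute_metrics(eval_pred):
    # eval_pred packs the model's logits and the reference labels for the eval split.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": (predictions == labels).mean()}

# Wire it into the Trainer from the hyperparameter sketch above:
# trainer = Trainer(..., compute_metrics=compute_metrics)
```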

### Results

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

[More Information Needed]

## Citation [optional]

**BibTeX:**

```bibtex
@misc{hailay_ft_exlm,
  author       = {Your Name},
  title        = {Hailay/FT_EXLMR},
  year         = {2024},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Hailay/FT_EXLMR}},
}
```

**APA:**

Hailay. (2024). *Hailay/FT_EXLMR*. Hugging Face. Retrieved from https://huggingface.co/Hailay/FT_EXLMR