With [DisorBERT](https://aclanthology.org/2023.acl-long.853/) we propose a double-domain adaptation of a language model. First, we adapted the model to social media language, and then we adapted it to the mental health domain. In both steps, we incorporated a lexical resource to guide the masking process of the language model and thus help it pay more attention to words related to mental disorders.
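
The lexicon-guided masking can be pictured as a custom masked-language-modeling data collator that raises the masking probability of lexicon terms. The sketch below only illustrates that idea and is not the exact mechanism used for DisorBERT: the mini-lexicon, the 0.5 boost, and the simplified masking step (always substituting `[MASK]` rather than the usual 80/10/10 split) are assumptions.

```python
import torch
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Hypothetical mini-lexicon; the actual lexical resource is the one described in the paper.
LEXICON = {"anxiety", "depression", "insomnia", "panic"}


class LexiconGuidedCollator(DataCollatorForLanguageModeling):
    """MLM collator that masks lexicon-related tokens more often (simplified sketch)."""

    def __init__(self, *args, lexicon=LEXICON, boost=0.5, **kwargs):
        super().__init__(*args, **kwargs)
        self.boost = boost
        # Pre-compute the token ids of every lexicon word (sub-word pieces included).
        self.lexicon_ids = {
            tid
            for word in lexicon
            for tid in self.tokenizer(word, add_special_tokens=False)["input_ids"]
        }

    def torch_mask_tokens(self, inputs, special_tokens_mask=None):
        labels = inputs.clone()
        # Base masking probability everywhere, boosted where a lexicon token appears.
        probs = torch.full(labels.shape, self.mlm_probability)
        for tid in self.lexicon_ids:
            probs[labels == tid] = self.boost
        if special_tokens_mask is None:
            special_tokens_mask = torch.tensor(
                [
                    self.tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
                    for ids in labels.tolist()
                ],
                dtype=torch.bool,
            )
        else:
            special_tokens_mask = special_tokens_mask.bool()
        probs.masked_fill_(special_tokens_mask, value=0.0)
        masked = torch.bernoulli(probs).bool()
        labels[~masked] = -100                         # loss is computed only on masked positions
        inputs[masked] = self.tokenizer.mask_token_id  # simplified: always substitute [MASK]
        return inputs, labels


tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
collator = LexiconGuidedCollator(tokenizer=tokenizer, mlm_probability=0.15)
```

Such a collator can be passed to a `Trainer` in place of the standard `DataCollatorForLanguageModeling`, so that masking is weighted toward lexicon terms instead of being uniform.
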
We follow the standard procedure for fine-tuning a masked language model from [Huggingface’s NLP Course](https://huggingface.co/learn/nlp-course/chapter7/3?fw=pt).

We used the models provided by HuggingFace Transformers v4.24.0 and PyTorch v1.13.0. In particular, for training the model we used a batch size of 256, the Adam optimizer with a learning rate of $1e^{-5}$, and cross-entropy as the loss function. We trained the models for three epochs on an NVIDIA Tesla V100 32GB SXM2 GPU.
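
A minimal sketch of that recipe with the hyper-parameters listed above might look as follows. The base checkpoint and the toy corpus are placeholders, and the `Trainer`'s default AdamW optimizer stands in for the Adam optimizer mentioned above; the masked-LM objective it minimizes is the cross-entropy loss.

```python
from datasets import Dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Placeholder checkpoint; in practice this would be the model being domain-adapted.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Toy corpus standing in for the real social-media / mental-health data.
texts = ["could not sleep again last night", "i have been feeling low for weeks"]
train_dataset = Dataset.from_dict({"text": texts}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

# The lexicon-guided collator sketched earlier could be swapped in here.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="mlm-finetuned",
    per_device_train_batch_size=256,  # batch size of 256, as reported above
    learning_rate=1e-5,               # learning rate of 1e-5, as reported above
    num_train_epochs=3,               # three epochs, as reported above
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=collator,
)
trainer.train()
```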