UMCU committed on
Commit 9ef3e7c
1 Parent(s): 6fc8968

Update README.md

Files changed (1)
README.md +1 -1
README.md CHANGED
@@ -37,7 +37,7 @@ results = [{'token': input_tokens[idx],
 It is perhaps good to note that we assume the [Inside-Outside-Beginning](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) format.
 
 ## Intended use
-The model is finetuned for negation detection on Dutch clinical text. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch. This particular model is trained on a 512-max token windows surrounding the concept-to-be negated.
+The model is finetuned for negation detection on Dutch clinical text. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch. This particular model is trained on a 512-max token windows surrounding the concept-to-be negated. Note that we also trained a biLSTM which can be incorporated in [MedCAT](https://github.com/CogStack/MedCAT).
 
 ## Data
 The pre-trained model was trained on nearly 10 million hospital notes from the Amsterdam University Medical Centres. The training data was anonymized before starting the pre-training procedure.
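For context, a minimal sketch of how the negation-detection model described in the updated intended-use section could be run with the Hugging Face `transformers` token-classification pipeline. The model id and the label names shown here are assumptions for illustration only (this commit does not name the repository or the tag set); substitute the actual values from the model card.

```python
# Minimal usage sketch, assuming a hypothetical model id and IOB-style labels.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="UMCU/MedRoBERTa.nl_NegationDetection",  # assumed repository id, replace with the real one
    aggregation_strategy="simple",                 # merge word pieces into labelled spans
)

# Dutch clinical sentence: "Patient has no fever and no shortness of breath."
text = "Patiënt heeft geen koorts en geen kortademigheid."

# Each prediction carries an IOB-derived group label, matching the
# Inside-Outside-Beginning scheme the README says it assumes.
for entity in tagger(text):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```

With `aggregation_strategy="simple"`, subword pieces are merged back into whole words, which is usually what you want when inspecting negation spans around a concept; drop it to see the raw per-token IOB predictions instead.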