Commit 63f70ee by DCU-NLP
1 Parent(s): 4e635b9

Update README.md

Files changed (1):
  1. README.md +5 -14
README.md CHANGED
@@ -4,30 +4,23 @@ tags:
  model-index:
  - name: bert-base-irish-cased-v1
    results: []
+ widget:
+ - text: "Ceoltóir [MASK] ab ea Johnny Cash."
  ---
 
- <!-- This model card has been generated automatically according to the information Keras had access to. You should
- probably proofread and complete it, then remove this comment. -->
 
  # bert-base-irish-cased-v1
 
- This model was trained from scratch on an unknown dataset.
- It achieves the following results on the evaluation set:
-
+ [gaBERT](https://arxiv.org/abs/2107.12930) is a BERT-base model trained on 7.9M Irish sentences. For more details, including the hyperparameters and pretraining corpora used, please refer to our paper.
 
  ## Model description
 
- More information needed
+ An encoder-based Transformer used to obtain features for fine-tuning on downstream tasks in Irish.
 
  ## Intended uses & limitations
 
- More information needed
-
- ## Training and evaluation data
+ Some of the data used to pretrain gaBERT was scraped from the web and potentially contains ethically problematic text (bias, hate, adult content, etc.). Consequently, downstream tasks/applications using gaBERT should be thoroughly tested with respect to ethical considerations.
 
- More information needed
-
- ## Training procedure
 
  ### Training hyperparameters
 
@@ -35,8 +28,6 @@ The following hyperparameters were used during training:
  - optimizer: None
  - training_precision: float32
 
- ### Training results
-
 
 
  ### Framework versions
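
The updated card describes a fill-mask model whose representations can be reused for fine-tuning, and the new `widget` entry gives a masked Irish sentence. Below is a minimal usage sketch; the repo id `DCU-NLP/bert-base-irish-cased-v1` and the use of the Hugging Face `transformers` library are assumptions inferred from the committer and model name, not stated in the diff itself.

```python
# Minimal sketch: query the new widget example and extract features.
# Assumption: the model is hosted as "DCU-NLP/bert-base-irish-cased-v1"
# (inferred from the committer and model name; not stated in the diff).
from transformers import AutoModel, AutoTokenizer, pipeline
import torch

MODEL_ID = "DCU-NLP/bert-base-irish-cased-v1"

# 1) Fill-mask, mirroring the widget sentence added to the front matter:
#    "Ceoltóir [MASK] ab ea Johnny Cash." ("Johnny Cash was a [MASK] musician.")
#    If the checkpoint ships only Keras/TensorFlow weights, pass framework="tf".
fill_mask = pipeline("fill-mask", model=MODEL_ID)
for prediction in fill_mask("Ceoltóir [MASK] ab ea Johnny Cash."):
    print(f"{prediction['token_str']}\t{prediction['score']:.3f}")

# 2) Feature extraction for downstream fine-tuning, as described in the card.
#    If only TensorFlow weights are available, add from_tf=True below.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

inputs = tokenizer("Ceoltóir ab ea Johnny Cash.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
features = outputs.last_hidden_state  # shape: (batch, sequence_length, hidden_size)
print(features.shape)
```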