---
license: mit
---
## TwiBERT

## Model Description

TwiBERT is a pre-trained language model specifically designed for the Twi language, which is widely spoken in Ghana, West Africa. The model has 61 million parameters, 6 attention heads, 768 hidden units, and a feed-forward size of 3072. To optimize its performance, TwiBERT was trained on a combination of the Asanti Twi Bible and a dataset sourced through crowdsourcing efforts.

## Limitations

The model was trained on a relatively limited dataset (approximately 5 MB), which may hinder its ability to learn intricate contextual embeddings and to generalize effectively. Additionally, the dataset's focus on the Bible could introduce a strong religious bias into the model's output.

## How to use it

You can use TwiBERT by fine-tuning it on a downstream task. The example code below illustrates how you can use the TwiBERT model on a downstream task:
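A minimal fine-tuning sketch using the Hugging Face `transformers` library. The model identifier `"TwiBERT"`, the two-label classification head, and the toy sentences are placeholders for illustration, not details from this card — substitute your own Hub path and labelled data:

```python
import torch
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Placeholder: replace with this model's actual repository path on the Hugging Face Hub.
model_name = "TwiBERT"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=2 is an illustrative choice for a binary downstream classification task.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy downstream dataset; replace with your own labelled Twi sentences.
texts = ["Twi sentence one", "Twi sentence two"]
labels = [0, 1]
encodings = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class ToyDataset(torch.utils.data.Dataset):
    """Wraps tokenized inputs and labels so the Trainer can iterate over them."""
    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

args = TrainingArguments(output_dir="twibert-finetuned", num_train_epochs=3)
trainer = Trainer(model=model, args=args, train_dataset=ToyDataset(encodings, labels))
trainer.train()
```

After training, `trainer.save_model()` writes the fine-tuned weights to the output directory so they can be reloaded with `from_pretrained`.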