varun-v-rao committed on
Commit 298f71d
1 parent: 24a71b9

End of training
README.md CHANGED
@@ -1,42 +1,54 @@
  ---
+ license: apache-2.0
+ base_model: t5-base
  tags:
- - t5
- - adapter-transformers
+ - generated_from_trainer
  datasets:
- - squad
+ - varun-v-rao/squad
+ model-index:
+ - name: t5-base-bn-adapter-1.79M-squad-model1
+   results: []
  ---
 
- # Adapter `varun-v-rao/t5-base-bn-adapter-1.79M-squad-model1` for t5-base
- 
- An [adapter](https://adapterhub.ml) for the `t5-base` model that was trained on the [squad](https://huggingface.co/datasets/squad/) dataset.
- 
- This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
- 
- ## Usage
- 
- First, install `adapters`:
- 
- ```
- pip install -U adapters
- ```
- 
- Now, the adapter can be loaded and activated like this:
- 
- ```python
- from adapters import AutoAdapterModel
- 
- model = AutoAdapterModel.from_pretrained("t5-base")
- adapter_name = model.load_adapter("varun-v-rao/t5-base-bn-adapter-1.79M-squad-model1", source="hf", set_active=True)
- ```
- 
- ## Architecture & Training
- 
- <!-- Add some description here -->
- 
- ## Evaluation results
- 
- <!-- Add some description here -->
- 
- ## Citation
- 
- <!-- Add some description here -->
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+ 
+ # t5-base-bn-adapter-1.79M-squad-model1
+ 
+ This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad dataset.
+ 
+ ## Model description
+ 
+ More information needed
+ 
+ ## Intended uses & limitations
+ 
+ More information needed
+ 
+ ## Training and evaluation data
+ 
+ More information needed
+ 
+ ## Training procedure
+ 
+ ### Training hyperparameters
+ 
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 16
+ - eval_batch_size: 4
+ - seed: 90
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 3
+ 
+ ### Training results
+ 
+ 
+ 
+ ### Framework versions
+ 
+ - Transformers 4.35.2
+ - Pytorch 2.1.1+cu121
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
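The hyperparameter list in the updated card pins down the linear LR schedule only up to the total number of optimizer steps, which additionally depends on the training-set size. A rough sketch of that calculation, assuming the standard SQuAD v1.1 train split of 87,599 examples and no gradient accumulation (neither is stated in the card):

```python
import math

# Estimate total optimizer steps for the linear LR schedule.
# Assumption: standard SQuAD v1.1 train split size, not stated in the card.
train_examples = 87_599
train_batch_size = 16  # from the card
num_epochs = 3         # from the card

steps_per_epoch = math.ceil(train_examples / train_batch_size)
total_steps = steps_per_epoch * num_epochs
print(steps_per_epoch, total_steps)  # 5475 16425
```

Under these assumptions, the linear scheduler would decay the learning rate from 2e-05 to zero over roughly 16,425 steps.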
runs/Feb09_13-59-30_gl1528.arc-ts.umich.edu/events.out.tfevents.1707505176.gl1528.arc-ts.umich.edu.3444676.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:be0a672be52f332fbc8fef8d69cb584757f747a31fcb86d021bcceccaeac97e5
- size 8801
+ oid sha256:2bdad1b26f900586bc1c1f19eef77f483c614143f049277e9706e4daee2fcda2
+ size 10882
squad/pytorch_adapter.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f931d48604113bd27047150746df8c044e0eb8d3f4f874f021f5236b18149974
+ oid sha256:f84bd8362f4247c1498f53d97279075be91bd76e08b8cf4444ab820c54b7cac8
  size 7190294
squad/pytorch_model_head.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9fe58adf70c611870fe6c04db450a5ec7a5c39fa4226eab57abb0f43d499a5c6
+ oid sha256:53e18a8e9c698b154803c694700c9f55302494a6307dd4b41be93f38b200fbca
  size 7706
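The three-line stanzas in the binary-file diffs above are Git LFS pointer files: the repository stores only the object's SHA-256 digest and byte size, per the spec referenced on each `version` line, while the actual weights live in LFS storage. A minimal sketch of how such a pointer is derived from a blob's contents (the sample bytes are illustrative):

```python
import hashlib

def lfs_pointer(data: bytes) -> str:
    """Build a Git LFS pointer for a blob: spec version, SHA-256 oid, size."""
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )

print(lfs_pointer(b"adapter weights go here"))
```

Because the pointer records the content hash, any change to the tracked file (here, the retrained `pytorch_adapter.bin` and `pytorch_model_head.bin`) shows up in the diff as a new `oid`, even when the byte size is unchanged.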