LuLu0630 committed on
Commit
be69fb4
1 Parent(s): a2a741f

Training completed

README.md ADDED
@@ -0,0 +1,50 @@
+ ---
+ license: apache-2.0
+ base_model: distilbert-base-uncased
+ tags:
+ - generated_from_trainer
+ datasets:
+ - emotion
+ model-index:
+ - name: distilbert-base-uncased-finetuned-emotion
+ results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # distilbert-base-uncased-finetuned-emotion
+
+ This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 64
+ - eval_batch_size: 64
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 2
+
+ ### Framework versions
+
+ - Transformers 4.35.0
+ - Pytorch 2.1.0+cu118
+ - Datasets 2.14.6
+ - Tokenizers 0.14.1
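The model card above names a `linear` learning-rate scheduler with a base rate of 2e-05 over 2 epochs. A minimal sketch of that decay in plain Python, assuming zero warmup steps and an illustrative total step count (neither is stated in the card):

```python
# Sketch of the linear LR schedule named in the model card above.
# Assumptions not in the card: warmup_steps=0 and total_steps=500,
# chosen here purely for illustration.
def linear_lr(step, total_steps, base_lr=2e-5, warmup_steps=0):
    """Linearly warm up to base_lr, then decay to 0 by total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# Learning rate at each optimizer step, from base_lr down to 0.
schedule = [linear_lr(s, total_steps=500) for s in range(501)]
```

With warmup disabled this is a straight line from `base_lr` at step 0 to 0 at the final step, matching `lr_scheduler_type: linear`.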
ch3.ipynb CHANGED
@@ -11,7 +11,12 @@
  "source": [
  "from transformers import AutoTokenizer\n",
  "from bertviz.transformer_neuron_view import BertModel\n",
- "from bert"
+ "from bertviz.neuron_view import show\n",
+ "\n",
+ "model_ckpt = \"bert_base_uncased\"\n",
+ "tokenizer = AutoTokenizer.from_pretrained(model_ckpt)\n",
+ "model = BertModel.from_pretrained(model_ckpt)\n",
+ "text = \"ti\""
  ]
  }
 ],
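Two caveats about the new notebook cell: the checkpoint name `bert_base_uncased` uses underscores, while the Hub checkpoint is `bert-base-uncased`, and the cell is truncated mid-string at `"ti"`. A hedged sketch of what the cell appears to be building toward, with the checkpoint name corrected, the heavy library calls kept inside a function so nothing is downloaded at import time, and an example sentence that is purely my assumption:

```python
# Sketch of the bertviz neuron view the notebook cell seems to be setting up.
# The Hub checkpoint uses hyphens ("bert-base-uncased"), not underscores as
# in the committed cell; the example text is an assumption, since the cell
# is truncated at "ti".
MODEL_CKPT = "bert-base-uncased"  # corrected from "bert_base_uncased"

def render_neuron_view(text: str = "time flies like an arrow"):
    # Imports live inside the function so this sketch can be defined
    # without bertviz/transformers installed.
    from transformers import AutoTokenizer
    from bertviz.transformer_neuron_view import BertModel
    from bertviz.neuron_view import show

    tokenizer = AutoTokenizer.from_pretrained(MODEL_CKPT)
    model = BertModel.from_pretrained(MODEL_CKPT)
    # Renders an interactive query/key visualization in a notebook.
    show(model, "bert", tokenizer, text, display_mode="light", layer=0, head=8)
```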
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1ceef46a215abb377f986e44002bbb6de026f2906f5f770fc520767891c6beeb
+ oid sha256:2147ad82e6d8e00b7604b104ee59cf2be064e386c4115adaf693819b02f4c7f0
  size 267844872
runs/Nov09_12-22-46_DESKTOP-13CQUUB/events.out.tfevents.1699503767.DESKTOP-13CQUUB.22876.1 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e76436c20fe32122ba33af04b9751b70c3660fc800cd8487abfbb6b503a65d72
- size 4808
+ oid sha256:a12af2e97376e4fe7ff36f76fbb099261e02ecf05950264f67609ff96a56cbb2
+ size 5531
runs/Nov09_12-22-46_DESKTOP-13CQUUB/events.out.tfevents.1699514771.DESKTOP-13CQUUB.22876.2 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:938988015e80ee487aee76aff80fc14e141efd38049a4d779ea9b17cc3996824
+ size 4530
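The binary files in this commit (`model.safetensors`, the TensorBoard event files) are stored as Git LFS pointer files like the one above: three `key value` lines giving the spec version, content hash, and byte size. A small sketch of reading that format, with the helper name mine and the inline example mirroring the new events file:

```python
# Minimal sketch of parsing a Git LFS pointer file like the ones diffed in
# this commit. The helper name is illustrative; the sample text mirrors the
# new events file added above.
def parse_lfs_pointer(text: str) -> dict:
    """Split the 'key value' lines of an LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:938988015e80ee487aee76aff80fc14e141efd38049a4d779ea9b17cc3996824
size 4530
"""
info = parse_lfs_pointer(pointer)
```

Git itself only versions this small pointer; the 4530-byte payload is fetched from LFS storage by matching the `oid` hash.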