aychang committed on
Commit 71e070f
1 Parent(s): 0c543e1

Add model card

Files changed (1)
  1. README.md +99 -0
README.md ADDED
---
language:
- en
thumbnail:
tags:
- text-classification
license: mit
datasets:
- trec
metrics:
---

# bert-base-cased trained on TREC 6-class task

## Model description

A simple `bert-base-cased` model fine-tuned for sequence classification on the coarse (6-class) labels of the "trec" dataset.

## Intended uses & limitations

#### How to use

##### Transformers

```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/bert-base-cased-trec-coarse"

model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline
from transformers import pipeline

nlp = pipeline("text-classification", model=model_name, tokenizer=model_name)

results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"])
```

Each entry in `results` is a dict with a `label` and a `score`.

##### AdaptNLP

```python
from adaptnlp import EasySequenceClassifier

model_name = "aychang/bert-base-cased-trec-coarse"
texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]

classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```

#### Limitations and bias

This is a minimal model fine-tuned on a single benchmark dataset. It classifies questions into the six coarse TREC classes and should not be expected to generalize beyond that task.

## Training data

TREC: https://huggingface.co/datasets/trec
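
For reference, the dataset and its coarse label set can be inspected with the `datasets` library. A minimal sketch; note that the coarse-label column is named `coarse_label` in recent versions of the Hub dataset (`label-coarse` in older ones):

```python
# Minimal sketch: load TREC and inspect the six coarse classes
# (ABBR, ENTY, DESC, HUM, LOC, NUM).
from datasets import load_dataset

dataset = load_dataset("trec")
print(dataset["train"][0])        # one question with its coarse and fine labels
print(dataset["train"].features)  # label names for the 6-class (coarse) task
```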

## Training procedure

The hardware and hyperparameters used for fine-tuning are listed below.

#### Hardware

One NVIDIA V100 GPU.

#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='./models',
    num_train_epochs=2,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    warmup_steps=500,
    weight_decay=0.01,
    evaluation_strategy="steps",
    logging_dir='./logs',
    save_steps=3000
)
```
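
For context, here is a sketch of how these arguments plug into the `Trainer` API. The tokenization step, label-column handling, and `num_labels` below are assumptions, since the card does not record the exact training script:

```python
# Sketch of the fine-tuning loop implied by the arguments above; the
# preprocessing details are assumptions, not taken from the card.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

dataset = load_dataset("trec").map(tokenize, batched=True)
# Assumption: recent Hub versions name the 6-class column `coarse_label`.
dataset = dataset.rename_column("coarse_label", "labels")

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=6)

trainer = Trainer(
    model=model,
    args=training_args,              # the TrainingArguments defined above
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```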

## Eval results

```
{'epoch': 2.0,
 'eval_accuracy': 0.974,
 'eval_f1': array([0.98181818, 0.94444444, 1.        , 0.99236641, 0.96995708,
        0.98159509]),
 'eval_loss': 0.138086199760437,
 'eval_precision': array([0.98540146, 0.98837209, 1.        , 0.98484848, 0.94166667,
        0.97560976]),
 'eval_recall': array([0.97826087, 0.90425532, 1.        , 1.        , 1.        ,
        0.98765432]),
 'eval_runtime': 1.6132,
 'eval_samples_per_second': 309.943}
```
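
The per-class `eval_f1`, `eval_precision`, and `eval_recall` arrays follow the dataset's label-index order. A sketch of a `compute_metrics` function that would produce output in this shape (an assumption; the card does not record the exact metric code):

```python
# Sketch of a compute_metrics function yielding per-class arrays like the
# ones above; an assumption, since the exact metric code is not recorded here.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average=None)
    return {"accuracy": accuracy_score(labels, preds),
            "f1": f1, "precision": precision, "recall": recall}
```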