andrewrreed (HF staff) committed
Commit 42529e5
1 Parent(s): 9fec679

Upload model
README.md ADDED
@@ -0,0 +1,131 @@
+ ---
+ library_name: span-marker
+ tags:
+ - span-marker
+ - token-classification
+ - ner
+ - named-entity-recognition
+ - generated_from_span_marker_trainer
+ metrics:
+ - precision
+ - recall
+ - f1
+ widget: []
+ pipeline_tag: token-classification
+ ---
+
+ # SpanMarker
+
+ This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that can be used for Named Entity Recognition. Per `config.json`, this checkpoint wraps a `roberta-base` encoder and recognizes a single entity type, person names (`PER`).
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** SpanMarker
+ - **Encoder:** [roberta-base](https://huggingface.co/roberta-base)
+ - **Maximum Sequence Length:** 512 tokens
+ - **Maximum Entity Length:** 8 words
+ <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)
+ - **Thesis:** [SpanMarker For Named Entity Recognition](https://raw.githubusercontent.com/tomaarsen/SpanMarkerNER/main/thesis.pdf)
+
+ ## Uses
+
+ ### Direct Use for Inference
+
+ ```python
+ from span_marker import SpanMarkerModel
+
+ # Download from the 🤗 Hub
+ model = SpanMarkerModel.from_pretrained("andrewrreed/span-marker-roberta-base-person-names-augmented")
+ # Run inference (the example sentence below is illustrative)
+ entities = model.predict("Amelia Earhart flew her single engine Lockheed Vega 5B across the Atlantic.")
+ ```
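+
+ `model.predict` returns one dictionary per detected entity. A minimal sketch of inspecting the output (field names follow SpanMarker 1.5; the printed values are illustrative):
+
+ ```python
+ # Each entity is a dict like:
+ # {"span": "Amelia Earhart", "label": "PER", "score": 0.99,
+ #  "char_start_index": 0, "char_end_index": 14}
+ for entity in entities:
+     print(entity["span"], entity["label"], round(entity["score"], 3))
+ ```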
+
+ ### Downstream Use
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ ```python
+ from datasets import load_dataset
+ from span_marker import SpanMarkerModel, Trainer
+
+ # Download from the 🤗 Hub
+ model = SpanMarkerModel.from_pretrained("andrewrreed/span-marker-roberta-base-person-names-augmented")
+
+ # Specify a Dataset with "tokens" and "ner_tags" columns
+ dataset = load_dataset("conll2003")  # For example CoNLL2003
+
+ # Initialize a Trainer using the pretrained model & dataset
+ trainer = Trainer(
+     model=model,
+     train_dataset=dataset["train"],
+     eval_dataset=dataset["validation"],
+ )
+ trainer.train()
+ trainer.save_model("span-marker-roberta-base-person-names-augmented-finetuned")
+ ```
+ </details>
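+
+ After training, a quick sanity check of the finetuned checkpoint can reuse the inference API from above (a minimal sketch; the example sentence is illustrative):
+
+ ```python
+ # Reload the finetuned model and run inference
+ finetuned = SpanMarkerModel.from_pretrained("span-marker-roberta-base-person-names-augmented-finetuned")
+ print(finetuned.predict("Ada Lovelace met Charles Babbage in London."))
+ ```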
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Framework Versions
+ - Python: 3.10.9
+ - SpanMarker: 1.5.0
+ - Transformers: 4.36.2
+ - PyTorch: 2.1.2+cu121
+ - Datasets: 2.16.1
+ - Tokenizers: 0.15.0
+
+ ## Citation
+
+ ### BibTeX
+ ```bibtex
+ @software{Aarsen_SpanMarker,
+     author = {Aarsen, Tom},
+     license = {Apache-2.0},
+     title = {{SpanMarker for Named Entity Recognition}},
+     url = {https://github.com/tomaarsen/SpanMarkerNER}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
added_tokens.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "<end>": 50266,
+   "<start>": 50265
+ }
config.json ADDED
@@ -0,0 +1,114 @@
+ {
+   "_name_or_path": "models/andrewrreed/span-marker-roberta-base-person-names-augmented/checkpoint-final",
+   "architectures": [
+     "SpanMarkerModel"
+   ],
+   "encoder": {
+     "_name_or_path": "roberta-base",
+     "add_cross_attention": false,
+     "architectures": [
+       "RobertaForMaskedLM"
+     ],
+     "attention_probs_dropout_prob": 0.1,
+     "bad_words_ids": null,
+     "begin_suppress_tokens": null,
+     "bos_token_id": 0,
+     "chunk_size_feed_forward": 0,
+     "classifier_dropout": null,
+     "cross_attention_hidden_size": null,
+     "decoder_start_token_id": null,
+     "diversity_penalty": 0.0,
+     "do_sample": false,
+     "early_stopping": false,
+     "encoder_no_repeat_ngram_size": 0,
+     "eos_token_id": 2,
+     "exponential_decay_length_penalty": null,
+     "finetuning_task": null,
+     "forced_bos_token_id": null,
+     "forced_eos_token_id": null,
+     "hidden_act": "gelu",
+     "hidden_dropout_prob": 0.1,
+     "hidden_size": 768,
+     "id2label": {
+       "0": "O",
+       "1": "B-PER",
+       "2": "I-PER"
+     },
+     "initializer_range": 0.02,
+     "intermediate_size": 3072,
+     "is_decoder": false,
+     "is_encoder_decoder": false,
+     "label2id": {
+       "B-PER": 1,
+       "I-PER": 2,
+       "O": 0
+     },
+     "layer_norm_eps": 1e-05,
+     "length_penalty": 1.0,
+     "max_length": 20,
+     "max_position_embeddings": 514,
+     "min_length": 0,
+     "model_type": "roberta",
+     "no_repeat_ngram_size": 0,
+     "num_attention_heads": 12,
+     "num_beam_groups": 1,
+     "num_beams": 1,
+     "num_hidden_layers": 12,
+     "num_return_sequences": 1,
+     "output_attentions": false,
+     "output_hidden_states": false,
+     "output_scores": false,
+     "pad_token_id": 1,
+     "position_embedding_type": "absolute",
+     "prefix": null,
+     "problem_type": null,
+     "pruned_heads": {},
+     "remove_invalid_values": false,
+     "repetition_penalty": 1.0,
+     "return_dict": true,
+     "return_dict_in_generate": false,
+     "sep_token_id": null,
+     "suppress_tokens": null,
+     "task_specific_params": null,
+     "temperature": 1.0,
+     "tf_legacy_loss": false,
+     "tie_encoder_decoder": false,
+     "tie_word_embeddings": true,
+     "tokenizer_class": null,
+     "top_k": 50,
+     "top_p": 1.0,
+     "torch_dtype": null,
+     "torchscript": false,
+     "transformers_version": "4.36.2",
+     "type_vocab_size": 1,
+     "typical_p": 1.0,
+     "use_bfloat16": false,
+     "use_cache": true,
+     "vocab_size": 50272
+   },
+   "entity_max_length": 8,
+   "id2label": {
+     "0": "O",
+     "1": "PER"
+   },
+   "id2reduced_id": {
+     "0": 0,
+     "1": 1,
+     "2": 1
+   },
+   "label2id": {
+     "O": 0,
+     "PER": 1
+   },
+   "marker_max_length": 128,
+   "max_next_context": null,
+   "max_prev_context": null,
+   "model_max_length": 512,
+   "model_max_length_default": 512,
+   "model_type": "span-marker",
+   "span_marker_version": "1.5.0",
+   "torch_dtype": "float32",
+   "trained_with_document_context": false,
+   "transformers_version": "4.36.2",
+   "vocab_size": 50272
+ }
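
A note on the label mappings above: the encoder carries the dataset's BIO tags (B-PER, I-PER), while SpanMarker classifies whole spans, so "id2reduced_id" collapses both BIO variants onto the single span label PER. A minimal sketch of that reduction, using only values from config.json:

```python
# Label reduction as encoded in config.json (illustrative)
encoder_id2label = {0: "O", 1: "B-PER", 2: "I-PER"}
id2reduced_id = {0: 0, 1: 1, 2: 1}  # B-PER and I-PER both reduce to PER
reduced_id2label = {0: "O", 1: "PER"}

for enc_id, tag in encoder_id2label.items():
    print(f"{tag} -> {reduced_id2label[id2reduced_id[enc_id]]}")
```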
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2177b9d6d5289d45d1069e3977bca097bd78d0ccdf4a22c30138244e8d9596dd
+ size 498640448
special_tokens_map.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "bos_token": "<s>",
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "unk_token": "<unk>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,73 @@
+ {
+   "add_prefix_space": true,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "50264": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "50265": {
+       "content": "<start>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "50266": {
+       "content": "<end>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "errors": "replace",
+   "mask_token": "<mask>",
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "RobertaTokenizer",
+   "trim_offsets": true,
+   "unk_token": "<unk>"
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff