RichardErkhov committed on
Commit f883e7f
1 Parent(s): 65ffeea

uploaded readme

Files changed (1):
  1. README.md +131 -0
README.md ADDED
@@ -0,0 +1,131 @@
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


roberta-base-squad-v1 - bnb 4bits
- Model creator: https://huggingface.co/csarron/
- Original model: https://huggingface.co/csarron/roberta-base-squad-v1/
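For reference, loading in 4-bit via bitsandbytes follows the standard `transformers` pattern. The sketch below quantizes the original `csarron/roberta-base-squad-v1` checkpoint on the fly (loading this repo's pre-quantized weights works the same way with its repo id); it assumes `bitsandbytes` and `accelerate` are installed and a CUDA GPU is available.

```python
# Minimal 4-bit loading sketch; assumes `pip install bitsandbytes accelerate`
# and a CUDA GPU. Swap in this repo's id to use the pre-quantized weights.
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit precision
    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in fp16
)

tokenizer = AutoTokenizer.from_pretrained("csarron/roberta-base-squad-v1")
model = AutoModelForQuestionAnswering.from_pretrained(
    "csarron/roberta-base-squad-v1",
    quantization_config=bnb_config,
    device_map="auto",  # requires `accelerate`
)
```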
Original model description:
---
language: en
thumbnail:
license: mit
tags:
- question-answering
- roberta
- roberta-base
datasets:
- squad
metrics:
- squad
widget:
- text: "Which name is also used to describe the Amazon rainforest in English?"
  context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."
- text: "How many square kilometers of rainforest is covered in the basin?"
  context: "The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain \"Amazonas\" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species."
---

## RoBERTa-base fine-tuned on SQuAD v1

This model was fine-tuned from the HuggingFace [RoBERTa](https://arxiv.org/abs/1907.11692) base checkpoint on [SQuAD1.1](https://rajpurkar.github.io/SQuAD-explorer).
This model is case-sensitive: it makes a difference between english and English.

## Details

| Dataset  | Split | # samples |
| -------- | ----- | --------- |
| SQuAD1.1 | train | 96.8k     |
| SQuAD1.1 | eval  | 11.8k     |
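These figures are larger than the raw SQuAD1.1 example counts (87,599 train / 10,570 dev), so they likely count post-windowing features produced when long contexts are split with `doc_stride`. A quick way to check the raw sizes, assuming the `datasets` library (not used in the original card):

```python
# Sketch: inspect raw SQuAD1.1 split sizes; assumes `pip install datasets`.
from datasets import load_dataset

squad = load_dataset("squad")
print(squad["train"].num_rows)       # 87599 raw examples
print(squad["validation"].num_rows)  # 10570 raw examples
```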

### Fine-tuning
- Python: `3.7.5`

- Machine specs:

  `CPU: Intel(R) Core(TM) i7-6800K CPU @ 3.40GHz`

  `Memory: 32 GiB`

  `GPUs: 2 GeForce GTX 1070, each with 8 GiB memory`

  `GPU driver: 418.87.01, CUDA: 10.1`

- script:

```shell
# after installing https://github.com/huggingface/transformers

cd examples/question-answering
mkdir -p data

# download the SQuAD v1.1 train and dev sets
wget -O data/train-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json
wget -O data/dev-v1.1.json https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json

python run_energy_squad.py \
    --model_type roberta \
    --model_name_or_path roberta-base \
    --do_train \
    --do_eval \
    --train_file train-v1.1.json \
    --predict_file dev-v1.1.json \
    --per_gpu_train_batch_size 12 \
    --per_gpu_eval_batch_size 16 \
    --learning_rate 3e-5 \
    --num_train_epochs 2.0 \
    --max_seq_length 320 \
    --doc_stride 128 \
    --data_dir data \
    --output_dir data/roberta-base-squad-v1 2>&1 | tee train-roberta-base-squad-v1.log
```

It took about 2 hours to finish.

### Results

**Model size**: `477M`

| Metric | Value    |
| ------ | -------- |
| **EM** | **83.0** |
| **F1** | **90.4** |

Note that the above results didn't involve any hyperparameter search.
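For readers who want to sanity-check EM/F1 numbers like these, the `squad` metric from the `evaluate` library computes both; the snippet below is a toy sketch (an assumption on our part; the original card does not show its evaluation code).

```python
# Toy EM/F1 computation with the "squad" metric; assumes `pip install evaluate`.
import evaluate

squad_metric = evaluate.load("squad")

# One hand-made example; in practice, pass the model's answers for the full dev set.
predictions = [{"id": "q1", "prediction_text": "February 7, 2016"}]
references = [{"id": "q1", "answers": {"text": ["February 7, 2016"], "answer_start": [23]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```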

## Example Usage

```python
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="csarron/roberta-base-squad-v1",
    tokenizer="csarron/roberta-base-squad-v1"
)

predictions = qa_pipeline({
    'context': "The game was played on February 7, 2016 at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.",
    'question': "What day was the game played on?"
})

print(predictions)
# output:
# {'score': 0.8625259399414062, 'start': 23, 'end': 39, 'answer': 'February 7, 2016'}
```

> Created by [Qingqing Cao](https://awk.ai/) | [GitHub](https://github.com/csarron) | [Twitter](https://twitter.com/sysnlp)

> Made with ❤️ in New York.