Files changed (1)
README.md CHANGED (+68 -61)
@@ -1,61 +1,68 @@
- ---
- language:
- - ja
- ---
-
- # Model Card for `answer-finder.yuzu`
-
- This model is a question answering model developed by Sinequa. It produces two lists of logit scores corresponding to
- the start token and end token of an answer.
-
- Model name: `answer-finder.yuzu`
-
- ## Supported Languages
-
- The model was trained and tested in the following languages:
-
- - Japanese
-
- Besides the aforementioned languages, basic support can be expected for the 104 languages that were used during the
- pretraining of the base model (See [original repository](https://github.com/google-research/bert)).
-
- ## Scores
-
- | Metric | Value |
- |:--------------------------------------------------------------|-------:|
- | F1 Score on JSQuAD with Hugging Face evaluation pipeline | 92.1 |
- | F1 Score on JSQuAD with Haystack evaluation pipeline | 91.5 |
-
- ## Inference Time
-
- | GPU | Batch size 1 | Batch size 32 |
- |:--------------------------------------------------------------|---------------:|---------------:|
- | NVIDIA A10 | 4 ms | 84 ms |
- | NVIDIA T4 | 15 ms | 361 ms |
-
- The inference times only measure the time the model takes to process a single batch, it does not include pre- or
- post-processing steps like the tokenization.
-
- **Note that the Answer Finder models are only used at query time.**
-
- ## Requirements
-
- - Minimal Sinequa version: 11.10.0
- - GPU memory usage: 1320 MiB
-
- Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch
- size of 32. It does not include the fix amount of memory that is consumed by the ONNX Runtime upon initialization which
- can be around 0.5 to 1 GiB depending on the used GPU.
-
- ## Model Details
-
- ### Overview
-
- - Number of parameters: 110 million
- - Base language model: [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)
- - Sensitive to casing and accents
-
- ### Training Data
-
- - [JSQuAD](https://github.com/yahoojapan/JGLUE) see [Paper](https://aclanthology.org/2022.lrec-1.317.pdf)
- - Japanese translation of SQuAD v2 "impossible" query-passage pairs
+ ---
+ language:
+ - ja
+ ---
+
+ # Model Card for `answer-finder.yuzu`
+
+ This model is a question answering model developed by Sinequa. It produces two lists of logit scores corresponding to the start token and end token of an answer.
+
+ Model name: `answer-finder.yuzu`
+
+ ## Supported Languages
+
+ The model was trained and tested in the following languages:
+
+ - Japanese
+
+ Besides the aforementioned languages, basic support can be expected for the 104 languages that were used during the pretraining of the base model (see the [original repository](https://github.com/google-research/bert)).
+
+ ## Scores
+
+ | Metric                                                   | Value |
+ |:---------------------------------------------------------|------:|
+ | F1 Score on JSQuAD with Hugging Face evaluation pipeline |  92.1 |
+ | F1 Score on JSQuAD with Haystack evaluation pipeline     |  91.5 |
+
+ ## Inference Time
+
+ | GPU        | Quantization type | Batch size 1 | Batch size 32 |
+ |:-----------|:------------------|-------------:|--------------:|
+ | NVIDIA A10 | FP16              |        17 ms |         27 ms |
+ | NVIDIA A10 | FP32              |         4 ms |         88 ms |
+ | NVIDIA T4  | FP16              |         3 ms |         64 ms |
+ | NVIDIA T4  | FP32              |        15 ms |        374 ms |
+ | NVIDIA L4  | FP16              |         3 ms |         39 ms |
+ | NVIDIA L4  | FP32              |         5 ms |        125 ms |
+
+ **Note that the Answer Finder models are only used at query time.**
+
+ ## GPU Memory Usage
+
+ | Quantization type | Memory   |
+ |:------------------|---------:|
+ | FP16              |  950 MiB |
+ | FP32              | 1350 MiB |
+
+ Note that GPU memory usage only includes how much GPU memory the actual model consumes on an NVIDIA T4 GPU with a batch size of 32. It does not include the fixed amount of memory that is consumed by the ONNX Runtime upon initialization, which can be around 0.5 to 1 GiB depending on the GPU used.
+
+ ## Requirements
+
+ - Minimum Sinequa version: 11.10.0
+ - Minimum Sinequa version for using FP16 models and GPUs with CUDA compute capability of 8.9+ (like NVIDIA L4): 11.11.0
+ - [CUDA compute capability](https://developer.nvidia.com/cuda-gpus): above 5.0 (above 6.0 for FP16 use)
+
+ ## Model Details
+
+ ### Overview
+
+ - Number of parameters: 110 million
+ - Base language model: [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)
+ - Sensitive to casing and accents
+
+ ### Training Data
+
+ - [JSQuAD](https://github.com/yahoojapan/JGLUE); see the [paper](https://aclanthology.org/2022.lrec-1.317.pdf)
+ - Japanese translation of SQuAD v2 "impossible" query-passage pairs
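
The model card above says the model emits two lists of logit scores, one for the answer's start token and one for its end token. Extractive QA models of this kind typically recover the answer by choosing the (start, end) pair that maximizes the summed logits over valid spans. The following is a minimal, hypothetical NumPy sketch of that decoding step (the function name, toy logits, and the `max_answer_len` cutoff are illustrative assumptions, not Sinequa's actual inference code):

```python
import numpy as np

def best_span(start_logits, end_logits, max_answer_len=30):
    """Pick the (start, end) token pair maximizing start_logit + end_logit,
    subject to start <= end and a maximum answer length."""
    start = np.asarray(start_logits, dtype=float)
    end = np.asarray(end_logits, dtype=float)
    n = len(start)
    # Outer sum: scores[i, j] = start[i] + end[j]
    scores = start[:, None] + end[None, :]
    # Valid spans satisfy i <= j < i + max_answer_len.
    valid = np.triu(np.ones((n, n), dtype=bool))
    valid &= ~np.triu(np.ones((n, n), dtype=bool), k=max_answer_len)
    scores[~valid] = -np.inf
    i, j = np.unravel_index(np.argmax(scores), scores.shape)
    return int(i), int(j)

# Toy logits for an 8-token passage; the best valid span is tokens 2..4.
start_logits = [0.1, 0.2, 5.0, 0.3, 0.1, 0.0, 0.2, 0.1]
end_logits   = [0.0, 0.1, 0.2, 0.4, 4.5, 0.1, 0.0, 0.2]
print(best_span(start_logits, end_logits))  # -> (2, 4)
```

In a full pipeline the selected token indices are then mapped back to character offsets in the original passage by the tokenizer.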
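
The Scores section reports SQuAD-style F1. For background, token-level F1 is the harmonic mean of precision and recall over tokens shared between the predicted and reference answers. A minimal sketch, using whitespace tokenization purely for illustration (JSQuAD evaluation of Japanese text requires language-appropriate tokenization, e.g. morphological analysis, and the evaluation pipelines named in the table apply their own normalization):

```python
from collections import Counter

def token_f1(prediction, reference):
    """Token-overlap F1 between a predicted and a reference answer string."""
    pred_tokens = prediction.split()
    ref_tokens = reference.split()
    # Multiset intersection counts each shared token at most
    # min(count in prediction, count in reference) times.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(token_f1("Mount Fuji", "Mount Fuji in Japan"))  # -> 0.6666666666666666
```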