Commit 2f908c4
1 Parent(s): 26c917e

Created README.md (#1)

- Created README.md (9e6d92c30dccea550a36d2857b2a931d994a6135)

Co-authored-by: Fabian Hofer <Integer-Ctrl@users.noreply.huggingface.co>
README.md ADDED
@@ -0,0 +1,26 @@
---
language:
- en
license: mit
---
# tiny-bert-ranker model card

This model is a fine-tuned version of [prajjwal1/bert-tiny](https://web.archive.org/web/20240315094214/https://huggingface.co/prajjwal1/bert-tiny),
created as part of our submission to [ReNeuIR 2024](https://web.archive.org/web/20240704171521/https://reneuir.org/shared_task.html).

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

The model is based on the pre-trained [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny). It is fine-tuned on a 1 GB subset of data
extracted from MS MARCO's [Train Triples Small](https://web.archive.org/web/20231209043304/https://microsoft.github.io/msmarco/Datasets.html).
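For illustration only, the sketch below shows one way such a fine-tuning run could be set up; it is not the authors' training script. It assumes a pointwise cross-encoder formulation in which each (query, positive, negative) triple from the TSV file is split into two binary examples, and the file path and hyperparameters are placeholders.

```python
# Hypothetical fine-tuning sketch (assumptions, not the authors' code):
# pointwise cross-encoder training of prajjwal1/bert-tiny on MS MARCO triples.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer


class TriplesDataset(Dataset):
    """One TSV line per triple: query, positive passage, negative passage (tab-separated)."""

    def __init__(self, path, tokenizer, max_length=256):
        self.examples = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                query, pos, neg = line.rstrip("\n").split("\t")
                self.examples.append((query, pos, 1.0))  # relevant pair
                self.examples.append((query, neg, 0.0))  # non-relevant pair
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        query, passage, label = self.examples[idx]
        enc = self.tokenizer(query, passage, truncation=True,
                             max_length=self.max_length, padding="max_length",
                             return_tensors="pt")
        item = {k: v.squeeze(0) for k, v in enc.items()}
        item["labels"] = torch.tensor(label)
        return item


tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")
model = AutoModelForSequenceClassification.from_pretrained("prajjwal1/bert-tiny", num_labels=1)

dataset = TriplesDataset("triples.train.small.subset.tsv", tokenizer)  # placeholder path
loader = DataLoader(dataset, batch_size=32, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = torch.nn.BCEWithLogitsLoss()

model.train()
for batch in loader:
    labels = batch.pop("labels")
    optimizer.zero_grad()
    logits = model(**batch).logits.squeeze(-1)  # one relevance logit per pair
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
```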

Tiny-bert-ranker is part of our investigation into the tradeoffs between efficiency and effectiveness in ranking models.
This approach does not involve BM25 score injection or distillation.

- **Developed by:** Team FSU at ReNeuIR 2024
- **Model type:** BERT-based ranking model (encoder-only transformer)
- **License:** MIT
- **Finetuned from model:** prajjwal1/bert-tiny
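As a usage illustration (not part of the original card), the checkpoint could be loaded as a cross-encoder and used to score query-passage pairs roughly as follows. The model path and the single-logit relevance head are assumptions; adjust them to the published checkpoint.

```python
# Minimal usage sketch: score query-passage pairs with the fine-tuned checkpoint.
# "path/to/tiny-bert-ranker" is a placeholder, and a single-logit
# sequence-classification head is assumed.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_path = "path/to/tiny-bert-ranker"  # placeholder for the published checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)
model.eval()

query = "what is the capital of france"
passages = [
    "Paris is the capital and most populous city of France.",
    "The Eiffel Tower was completed in 1889.",
]

# Encode each (query, passage) pair jointly, as a cross-encoder expects.
inputs = tokenizer([query] * len(passages), passages,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)  # one relevance score per pair

# Higher score = more relevant; sort to obtain the ranking.
for passage, score in sorted(zip(passages, scores.tolist()),
                             key=lambda pair: pair[1], reverse=True):
    print(f"{score:.3f}\t{passage}")
```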