---
thumbnail: https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true
tags:
- kaggle
- rembert
- pytorch
- question-answering
language:
- multilingual
- hi
- ta
license: cc0-1.0
inference: false
datasets:
- Commonlit-Readibility
---

<div align="center">
  <img src="https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true">
</div>

This repository contains the fine-tuned [**google/rembert**](https://huggingface.co/transformers/model_doc/rembert.html) model weights from my team's experimentation strategy during the [**chaii - Hindi and Tamil Question Answering**](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition. The checkpoints are listed below with their corresponding public leaderboard (LB) scores:

| Huggingface Hub Link | Public LB Score |
| :---: | :---: |
| [**SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii) | 0.724 |
| [**SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii) | 0.723 |
| [**SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii) | 0.737 |
| [**SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii) | 0.725 |
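These checkpoints are standard 🤗 Transformers weights, so any of them can be loaded with the `question-answering` pipeline. A minimal sketch, using the best-scoring checkpoint from the table (the Hindi question/context pair is an illustrative example, not from the competition data):

```python
from transformers import pipeline

# Checkpoint from the table above (public LB 0.737).
MODEL_ID = "SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii"

# Build an extractive question-answering pipeline from the fine-tuned weights.
qa = pipeline("question-answering", model=MODEL_ID)

# Illustrative Hindi example; RemBERT is multilingual, so Tamil works the same way.
context = "ताजमहल भारत के आगरा शहर में यमुना नदी के किनारे स्थित है।"
question = "ताजमहल कहाँ स्थित है?"

# max_seq_len and doc_stride mirror the values encoded in the checkpoint name.
result = qa(question=question, context=context, max_seq_len=400, doc_stride=135)
print(result["answer"], result["score"])
```

The `maxseq`/`docstride` suffixes in each model name record the tokenization settings used during fine-tuning, so passing the matching values at inference keeps the sliding-window chunking consistent with training.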