Dataset Card for retrieval-skquad

Dataset Summary

STS SK-QuAD Retrieval is a dataset for evaluating Slovak search performance with metrics such as MRR, MAP, and NDCG. It is derived from the SK-QuAD dataset: candidate answers for each question were first retrieved with a search engine and then annotated, with each candidate assigned a relevance category. The dataset provides a standardized benchmark for Slovak language search evaluation and a resource for further research and development in this area.

Languages

Slovak

Dataset Structure

The dataset follows the structure recommended by the BEIR toolkit.

corpus.jsonl : contains a list of JSON dictionaries, one per line, each with three fields: _id (unique document identifier), title (document title), and text (paragraph text).

For example:

{"_id": "598395",
 "title": "Vysoký grúň (Laborecká vrchovina)",
 "text": "Cez vrch Vysoký grúň vedie hlavná  červená turistická značka, ktorá zároveň vedie po hlavnom karpatskom hrebeni cez najvýchodnejší bod Slovenska – trojmedzie (1207.7 Mnm) na vrchu Kremenec (1221.0 Mnm) a prechádza po slovensko-poľskej štátnej hranici cez viacero vrchov s viacerými panoramatickými vyhliadkami, ako napr. Kamenná lúka (1200.9 Mnm), Jarabá skala (1199.0 Mnm), Ďurkovec (1188.7 Mnm), Pľaša (1162.8 Mnm), ďalej cez Ruské sedlo (801.0 Mnm), vrchy Rypy (1002.7 Mnm), Strop, (1011.2 Mnm), Černiny (929.4 Mnm), Laborecký priesmyk (684.0 Mnm) až k Duklianskemu priesmyku (502.0 Mnm)."}

queries.jsonl : contains a list of JSON dictionaries, one per line, each with two fields: _id (unique query identifier) and text (query text).

For example:

{"_id": "1000005",
 "text": "Akú nadmorskú výšku má vrch Kremenec ?"
}
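
Both corpus.jsonl and queries.jsonl use the JSON Lines format, so they can be read with the standard json module. A minimal sketch (the load_jsonl helper is illustrative, not part of the dataset):

import json

def load_jsonl(path):
    # One JSON dictionary per line, keyed by its _id field.
    with open(path, encoding="utf-8") as f:
        return {row["_id"]: row for row in map(json.loads, f)}

corpus = load_jsonl("corpus.jsonl")    # _id -> {"_id", "title", "text"}
queries = load_jsonl("queries.jsonl")  # _id -> {"_id", "text"}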

qrels/test.tsv : a tab-separated file with three columns, in this order: query-id, corpus-id, and score.

For example:

# query-id corpus-id score
1000005 598395 5
1000005 576721 0
1000005 576728 0
1000005 146843 4
1000005 520490 2

Scores of the answers are based on the annotators' decisions:

  • 5 and 4: the paragraph contains a relevant answer
  • 2: the paragraph is partially relevant
  • 0: the paragraph is not relevant
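
A minimal sketch of parsing qrels/test.tsv into the nested dictionary shape used by BEIR (query-id -> corpus-id -> score); the header check is an assumption about the file's first line:

import csv

qrels = {}
with open("qrels/test.tsv", encoding="utf-8") as f:
    for row in csv.reader(f, delimiter="\t"):
        if row[0].startswith("#") or row[0] == "query-id":
            continue  # skip a header or comment line, if present
        query_id, corpus_id, score = row[0], row[1], int(row[2])
        qrels.setdefault(query_id, {})[corpus_id] = score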

Evaluation of an embedding model

To evaluate an embedding model on this dataset, you can use the Hugging Face Hub and the BEIR toolkit.

Example of evaluating a model:

from beir import LoggingHandler
from beir.retrieval import models
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES
from huggingface_hub import snapshot_download
import logging

#### Print debug information to stdout
logging.basicConfig(format='%(asctime)s - %(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S',
                    level=logging.INFO,
                    handlers=[LoggingHandler()])

#### Download the dataset and load the SBERT model
data_path = snapshot_download(repo_id="TUKE-KEMT/retrieval-skquad", repo_type="dataset")
model_path = "TUKE-DeutscheTelekom/slovakbert-skquad-mnlr"

model = DRES(models.SentenceBERT(model_path), batch_size=16)

corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")

#### Retrieve using dot-product scores
retriever = EvaluateRetrieval(model, score_function="dot")  # or "cos_sim" for cosine similarity
results = retriever.retrieve(corpus, queries)

#### Evaluate with NDCG@k, MAP@k, Recall@k and Precision@k, where k = [1, 3, 5, 10, 100, 1000]
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
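
The returned values are plain dictionaries keyed by metric name and cutoff (e.g. "NDCG@10"), so individual scores can be printed directly:

print(ndcg["NDCG@10"], _map["MAP@10"], recall["Recall@10"], precision["P@10"])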

Database Content

Number of Questions   Total Answers
945                   19,845

Correct Answers   Count of Questions
2                 466
3                 250
4                 119
5                 60
6                 20
7                 12
8                 11
9                 4
14                1
19                1
20                1
Total             945

Dataset Creation

Curation Rationale

This dataset stems from the need to evaluate search performance in the Slovak language. By selecting questions from the SK-QuAD dataset and annotating the answers retrieved for them by a search engine, it aims to provide a standardized benchmark for assessing Slovak language search effectiveness.

Source Data

Initial Data Collection and Normalization

Initial data collection and normalization involved selecting questions from SK-QuAD, the first manually annotated Slovak question answering dataset. Only suitable questions were chosen, to ensure relevance and consistency; this helped maintain the quality of the data for subsequent evaluation.

Who are the source language producers?

The creator is a student from the Department of Electronics and Multimedia Telecommunications (KEMT) at the Faculty of Electrical Engineering and Informatics (FEI) of the Technical University of Košice (TUKE). The dataset was developed as part of the student's master's thesis, Semantic Search in Slovak Text.

Annotations

Annotation process

The annotation process involved sourcing questions and their corresponding answers from the SK-QuAD dataset. Before annotation, candidate answers to each question were obtained using semantic search with the slovakbert-skquad-mnlr model. During annotation, the best answers were identified and categorized by relevance.
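
A minimal sketch of this kind of candidate retrieval with the sentence-transformers library (the top_k value and the in-memory corpus are illustrative assumptions, not the exact annotation pipeline):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("TUKE-DeutscheTelekom/slovakbert-skquad-mnlr")

# Embed all paragraphs once, then search each question against them.
doc_ids = list(corpus)  # corpus as loaded from corpus.jsonl above
corpus_emb = model.encode([corpus[i]["text"] for i in doc_ids], convert_to_tensor=True)
query_emb = model.encode("Akú nadmorskú výšku má vrch Kremenec ?", convert_to_tensor=True)

# Top 10 most similar paragraphs by cosine similarity.
hits = util.semantic_search(query_emb, corpus_emb, top_k=10)[0]
candidates = [(doc_ids[h["corpus_id"]], h["score"]) for h in hits]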

The relevance categories are as follows:

  • Category 0: answers deemed irrelevant or overlooked during annotation, indicating a lack of alignment with the query or inadequacy in addressing the question's intent.
  • Category 1: the highest level of relevance; answers sourced directly from the SK-QuAD dataset and verified to be accurate and comprehensive responses to the questions.
  • Category 2: answers directly relevant to the posed questions, providing informative and pertinent responses that effectively address the query.
  • Category 3: answers weakly relevant to the questions; they may contain some relevant information but lack precision or comprehensiveness.
  • Category 4: answers marked by evaluators as not relevant; they fail to provide meaningful or accurate information, indicating a disconnect from the query's intent or context.

Categorizing answers by relevance level is intended to ensure the dataset's quality and utility for accurately evaluating search performance in the Slovak language. The categories allow nuanced analysis and interpretation of search results, and support further research and development in information retrieval and natural language processing.

Who are the annotators?

Students from the Faculty of Electrical Engineering and Informatics of the Technical University of Košice.

Personal and Sensitive Information

The dataset is based on the Slovak Wikipedia, which includes a wealth of information about various individuals, including famous personalities, as well as groups and organizations. It is important to handle this information with care, ensuring compliance with ethical standards and privacy regulations when analyzing or processing data related to individuals or groups.

Considerations for Using the Data

Social Impact of Dataset

This dataset can contribute to improving Slovak search engines by providing data for evaluation purposes. It has the potential to improve the efficiency and relevance of search results over Slovak or multilingual texts.

Additional Information

Dataset Curators

Technical University of Košice

Licensing Information

CC BY-NC-SA 4.0 (cc-by-nc-sa-4.0)

Citation Information

[Needs More Information]
