PEARL-Benchmark: A benchmark for evaluating phrase representations

Learning High-Quality and General-Purpose Phrase Representations.
Lihu Chen, Gaël Varoquaux, Fabian M. Suchanek. Accepted to Findings of EACL 2024.

The PEARL benchmark contains 9 phrase-level datasets covering five types of tasks, spanning both data science and natural language processing.

Description

  • Paraphrase Classification: PPDB and PPDB filtered (Wang et al., 2021)
  • Phrase Similarity: Turney (Turney, 2012) and BIRD (Asaadi et al., 2019)
  • Entity Retrieval: two datasets we constructed from YAGO (Pellissier Tanon et al., 2020) and UMLS (Bodenreider, 2004)
  • Entity Clustering: CoNLL 03 (Tjong Kim Sang and De Meulder, 2003) and BC5CDR (Li et al., 2016); a sketch of the NMI-based scoring appears after the table below
  • Fuzzy Join: the AutoFJ benchmark (Li et al., 2021), which contains 50 diverse fuzzy-join datasets

    Dataset         Task                        Samples     Avg. Length  Metric
    PPDB            Paraphrase Classification   23.4k       2.5          Acc
    PPDB filtered   Paraphrase Classification   15.5k       2.0          Acc
    Turney          Phrase Similarity           2.2k        1.2          Acc
    BIRD            Phrase Similarity           3.4k        1.7          Pearson
    YAGO            Entity Retrieval            10k         3.3          Top-1 Acc
    UMLS            Entity Retrieval            10k         4.1          Top-1 Acc
    CoNLL           Entity Clustering           5.0k        1.5          NMI
    BC5CDR          Entity Clustering           9.7k        1.4          NMI
    AutoFJ          Fuzzy Join                  50 subsets  3.8          Acc
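
As an illustration of how the clustering datasets are typically scored, the sketch below embeds a list of entity mentions, clusters the embeddings with k-means, and reports NMI against gold labels. The embed helper, the mentions, and the gold labels are placeholders for illustration only, not the actual schema of the benchmark.

# Minimal sketch of NMI-based entity-clustering evaluation (illustrative only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

def embed(phrases):
    # Placeholder encoder: replace with your own phrase-embedding model.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(phrases), 128))

mentions = ["New York", "NYC", "Paris", "Paris, France"]
gold_labels = [0, 0, 1, 1]  # gold entity ids (made up for illustration)

vectors = embed(mentions)
pred_labels = KMeans(n_clusters=len(set(gold_labels)), n_init=10).fit_predict(vectors)
print("NMI:", normalized_mutual_info_score(gold_labels, pred_labels))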

Usage

from datasets import load_dataset

# Load the test split of the Turney phrase-similarity config
turney_dataset = load_dataset("Lihuchen/pearl_benchmark", "turney", split="test")
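
To see what a loaded split looks like, you can inspect it with the standard datasets API; the lines below use only documented Dataset attributes and do not assume any particular column schema.

print(turney_dataset)               # split summary: features and number of rows
print(turney_dataset.column_names)  # column names of the Turney config
print(turney_dataset[0])            # first example as a Python dict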

Evaluation

We provide a Python script, eval.py, to evaluate your model:

python eval.py -batch_size 32
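
If you prefer to score a model without eval.py, the sketch below shows the general recipe behind the Top-1 accuracy used for the entity-retrieval datasets: embed queries and candidate entity names, rank candidates by cosine similarity, and count how often the gold entity is ranked first. The embed helper and the example data are illustrative assumptions, not the schema used by eval.py.

# Sketch of Top-1 accuracy for embedding-based entity retrieval (illustrative only).
import numpy as np

def embed(phrases):
    # Placeholder encoder: swap in your own phrase-embedding model.
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(phrases), 128))

candidates = ["aspirin", "ibuprofen", "paracetamol"]      # entity names (illustrative)
queries = [("acetylsalicylic acid", "aspirin"),           # (query phrase, gold entity)
           ("acetaminophen", "paracetamol")]

cand_vecs = embed(candidates)
cand_vecs /= np.linalg.norm(cand_vecs, axis=1, keepdims=True)

hits = 0
for query, gold in queries:
    q = embed([query])[0]
    q /= np.linalg.norm(q)
    best = candidates[int(np.argmax(cand_vecs @ q))]      # nearest candidate by cosine similarity
    hits += int(best == gold)

print("Top-1 accuracy:", hits / len(queries))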

Citation

@article{chen2024learning,
  title={Learning High-Quality and General-Purpose Phrase Representations},
  author={Chen, Lihu and Varoquaux, Ga{\"e}l and Suchanek, Fabian M},
  journal={arXiv preprint arXiv:2401.10407},
  year={2024}
}