"""HSE Russian dataset by Glushkova et al.."""

import datasets
import pandas as pd

_CITATION = """
@article{glushkova2019char,
  title={Char-RNN and Active Learning for Hashtag Segmentation},
  author={Glushkova, Taisiya and Artemova, Ekaterina},
  journal={arXiv preprint arXiv:1911.03270},
  year={2019}
}
"""

_DESCRIPTION = """
2000 real hashtags collected from several pages about civil services on vk.com (a Russian social network) 
and then segmented manually.
"""
_URL = "https://raw.githubusercontent.com/glushkovato/hashtag_segmentation/master/data/test_rus.csv"


class HSE(datasets.GeneratorBasedBuilder):
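    """Loader for the HSE Russian hashtag segmentation dataset (test split only)."""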

    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "index": datasets.Value("int32"),
                    "hashtag": datasets.Value("string"),
                    "segmentation": datasets.Value("string")
                }
            ),
            supervised_keys=None,
            homepage="https://github.com/glushkovato/hashtag_segmentation",
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
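        # Only the manually segmented test file (test_rus.csv) is used, so a
        # single TEST split is generated.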
        downloaded_files = dl_manager.download(_URL)
        return [
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files}),
        ]

    def _generate_examples(self, filepath):
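        # Each CSV row is expected to provide the raw "hashtag" plus a
        # character-aligned 0/1 "true_segmentation" mask marking word boundaries.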

        df = pd.read_csv(filepath)
        records = df.to_dict("records")

        def get_segmentation(hashtag, mask):
            # `mask` is a 0/1 string aligned with `hashtag`: a "1" marks a word boundary
            # after that character, e.g. "helloworld" + "0000100000" -> "hello world".
            # Joining pairwise also works for hashtags that contain literal 0/1 digits.
            return "".join(ch + (" " if bit == "1" else "") for ch, bit in zip(hashtag, mask)).strip()

        for idx, row in enumerate(records):
            yield idx, {
                "index": idx,
                "hashtag": row["hashtag"],
                "segmentation": get_segmentation(
                    row["hashtag"],
                    row["true_segmentation"]
                )}
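

if __name__ == "__main__":
    # Minimal usage sketch / smoke test, assuming a `datasets` version that still
    # supports dataset loading scripts (recent releases may require
    # trust_remote_code=True, or may no longer accept script-based datasets at all).
    ds = datasets.load_dataset(__file__, split="test")
    print(ds[0])  # e.g. {"index": 0, "hashtag": "...", "segmentation": "..."}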