Dataset Preview

The full dataset viewer is not available for this dataset: the viewer could not determine the size of the external data files (a request to raw.githubusercontent.com timed out). Only a preview of the rows is shown below.

| index (int32) | hashtag (string) | segmentation (string) | alternatives (sequence) |
|---|---|---|---|
| 0 | Typo3 | Typo3 | TYPO3 |
| 1 | KolacheFactory | Kolache Factory | |
| 2 | moanmondays | moan mondays | |
| 3 | semantic | semantic | |
| 4 | babypunch | baby punch | |
| 5 | flossylove | flossy love | |
| 6 | justsayin | just sayin | |
| 7 | Logies | Logies | logies |
| 8 | football3s | football 3s | |
| 9 | prosehos | prose hos | prosehos |
| 10 | blisters | blisters | |
| 11 | homebirth | home birth | |
| 12 | Continental | Continental | |
| 13 | kentuckyderby | kentucky derby | |
| 14 | iPhoneForAlonis | iPhone For Alonis | |
| 15 | gr8conf | gr8conf | gr8 conf, GR8Conf |
| 16 | TWELVE | TWELVE | |
| 17 | follow | follow | |
| 18 | nobitchassnessfriday | no bitchassness friday | no bitch assness friday, no bitchass ness friday |
| 19 | openspacecode | openspacecode | open space code, OpenSpace code |
| 20 | thegreenteen | the green teen | |
| 21 | medifast | medifast | |
| 22 | Geek | Geek | |
| 23 | Liverpool | Liverpool | |
| 24 | phpvikinger | php vikinger | |
| 25 | xenocide | xenocide | |
| 26 | campcourage | camp courage | |
| 27 | stillnotreadyfor100degreeweatherbutluvPhx | still not ready for 100 degree weather but luv Phx | |
| 28 | McFly | McFly | |
| 29 | SharePoint | SharePoint | |
| 30 | LemonTree | Lemon Tree | |
| 31 | ccnet | ccnet | CCNet |
| 32 | hamthrax | hamthrax | Hamthrax |
| 33 | PeopleBrowsr | People Browsr | PeopleBrowsr, people browsr |
| 34 | maddie | maddie | |
| 35 | sota09 | sota 09 | |
| 36 | sagepay | sage pay | |
| 37 | unfollow | unfollow | |
| 38 | london2009 | london 2009 | |
| 39 | charitytuesday | charity tuesday | |
| 40 | ItsGrimUpNorth | Its Grim Up North | It s Grim Up North |
| 41 | elementsc | element sc | |
| 42 | twares | twares | |
| 43 | steviewonder | stevie wonder | Stevie wonder |
| 44 | fartingloud | farting loud | |
| 45 | beerfriday | beer friday | |
| 46 | jonaslive | jonas live | Jonas live |
| 47 | vegas | vegas | Vegas |
| 48 | appstore | appstore | app store |
| 49 | PUGHUG | PUG HUG | pug hug |
| 50 | nerdprom | nerd prom | |
| 51 | otalia | otalia | Otalia |
| 52 | girlcrush | girl crush | |
| 53 | Hunchback | Hunchback | |
| 54 | trafficza | traffic za | |
| 55 | writechat | writechat | WriteChat, write chat |
| 56 | codecompletion | code completion | |
| 57 | magic | magic | |
| 58 | cartoonavie | cartoon avie | |
| 59 | HsvTweetup | Hsv Tweetup | |
| 60 | stc09 | stc09 | stc 09 |
| 61 | wishiwasinLA | wish i was in LA | wish I was in LA |
| 62 | Prabhakaran | Prabhakaran | |
| 63 | lions2009 | lions 2009 | Lions 2009 |
| 64 | dongle | dongle | |
| 65 | heartballoon | heart balloon | |
| 66 | spymaster | spymaster | |
| 67 | UFC97 | UFC97 | UFC 97 |
| 68 | twittermusical | twitter musical | |
| 69 | COBOL | COBOL | |
| 70 | familyhistory | family history | |
| 71 | webdu | webdu | webDU, web du |
| 72 | makerfaire | maker faire | |
| 73 | sleep | sleep | |
| 74 | Bruins | Bruins | |
| 75 | whatcotyisnotdoing | what coty is not doing | what COTY is not doing |
| 76 | qanda | q and a | Q and A |
| 77 | starwars | star wars | Star Wars |
| 78 | auremix | au remix | |
| 79 | phillycodecamp | philly code camp | |
| 80 | 3wordsaftersex | 3 words after sex | |
| 81 | spoonies | spoonies | |
| 82 | gfree | gfree | g free, Gfree |
| 83 | bacon | bacon | |
| 84 | lostthegame | lost the game | |
| 85 | DRWHO | DR WHO | |
| 86 | electrohouse | electro house | |
| 87 | eurovision | eurovision | |
| 88 | Letmego | Let me go | |
| 89 | 3aaah | 3 aaah | |
| 90 | bck5 | bck5 | |
| 91 | MusicMonday | Music Monday | music monday |
| 92 | vitaminwater | vitamin water | |
| 93 | symp09 | symp 09 | symp09, SYMP 09 |
| 94 | dissertation | dissertation | |
| 95 | Nambu | Nambu | |
| 96 | asylum | asylum | |
| 97 | thegreendeath | the green death | |
| 98 | bittorrent | bit torrent | bittorrent, Bit Torrent |
| 99 | omgimpatient | omg impatient | OMG impatient |
YAML Metadata Warning: The task_categories "structure-prediction" is not in the official list: text-classification, token-classification, table-question-answering, question-answering, zero-shot-classification, translation, summarization, feature-extraction, text-generation, text2text-generation, fill-mask, sentence-similarity, text-to-speech, text-to-audio, automatic-speech-recognition, audio-to-audio, audio-classification, voice-activity-detection, depth-estimation, image-classification, object-detection, image-segmentation, text-to-image, image-to-text, image-to-image, image-to-video, unconditional-image-generation, video-classification, reinforcement-learning, robotics, tabular-classification, tabular-regression, tabular-to-text, table-to-text, multiple-choice, text-retrieval, time-series-forecasting, text-to-video, image-text-to-text, visual-question-answering, document-question-answering, zero-shot-image-classification, graph-ml, mask-generation, zero-shot-object-detection, text-to-3d, image-to-3d, image-feature-extraction, other
YAML Metadata Warning: The task_categories "conditional-text-generation" is not in the official list: text-classification, token-classification, table-question-answering, question-answering, zero-shot-classification, translation, summarization, feature-extraction, text-generation, text2text-generation, fill-mask, sentence-similarity, text-to-speech, text-to-audio, automatic-speech-recognition, audio-to-audio, audio-classification, voice-activity-detection, depth-estimation, image-classification, object-detection, image-segmentation, text-to-image, image-to-text, image-to-image, image-to-video, unconditional-image-generation, video-classification, reinforcement-learning, robotics, tabular-classification, tabular-regression, tabular-to-text, table-to-text, multiple-choice, text-retrieval, time-series-forecasting, text-to-video, image-text-to-text, visual-question-answering, document-question-answering, zero-shot-image-classification, graph-ml, mask-generation, zero-shot-object-detection, text-to-3d, image-to-3d, image-feature-extraction, other

Dataset Card for STAN Small

Dataset Summary

Hashtags from the Stanford Sentiment Analysis Dataset, manually annotated with their segmentations by Bansal et al.

Languages

English

Dataset Structure

Data Instances

{
    "index": 300,
    "hashtag": "microsoftfail",
    "segmentation": "microsoft fail",
    "alternatives": {
        "segmentation": [
            "Microsoft fail"
        ]
    }
}
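
To work with these instances programmatically, the dataset can be loaded with the datasets library. A minimal sketch, assuming the repository id ruanchaves/stan_small and a single train split:

from datasets import load_dataset

# Repository id and split name are assumptions; adjust to where the data is hosted.
dataset = load_dataset("ruanchaves/stan_small", split="train")

row = dataset[0]
print(row["hashtag"])                       # original hashtag
print(row["segmentation"])                  # gold segmentation
print(row["alternatives"]["segmentation"])  # other accepted segmentations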

Data Fields

  • index: a numerical index.
  • hashtag: the original hashtag.
  • segmentation: the gold segmentation for the hashtag.
  • alternatives: other segmentations that are also accepted as a gold segmentation.

Although segmentation has exactly the same characters as hashtag except for the spaces, the segmentations inside alternatives may have characters corrected to uppercase.
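
The alternatives field matters mainly for evaluation: a predicted segmentation should count as correct if it matches either the gold segmentation or any accepted alternative. A hypothetical exact-match scorer (the function name and signature are illustrative, not part of the dataset):

def is_correct(prediction: str, row: dict) -> bool:
    # Accept the gold segmentation or any listed alternative, verbatim.
    accepted = [row["segmentation"]] + list(row["alternatives"]["segmentation"])
    return prediction in accepted

row = {
    "hashtag": "microsoftfail",
    "segmentation": "microsoft fail",
    "alternatives": {"segmentation": ["Microsoft fail"]},
}
assert is_correct("Microsoft fail", row)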

Dataset Creation

  • All hashtag segmentation and identifier splitting datasets on this profile have the same basic fields: hashtag and segmentation or identifier and segmentation.

  • The only difference between hashtag and segmentation (or between identifier and segmentation) is the whitespace. Spell checking, expanding abbreviations, or correcting characters to uppercase goes into other fields (see the sketch after this list).

  • There is always whitespace between an alphanumeric character and a sequence of any special characters (such as _, :, ~).

  • If there are any annotations for named entity recognition and other token classification tasks, they are given in a spans field.
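
The whitespace-only convention above can be verified mechanically. A small sanity-check sketch, assuming rows shaped like the instance shown earlier; alternatives are compared case-insensitively because they may correct characters to uppercase:

def check_row(row: dict) -> None:
    # Removing the spaces from the gold segmentation must reproduce the hashtag.
    assert row["segmentation"].replace(" ", "") == row["hashtag"]
    # Alternatives may differ in casing, so compare case-insensitively.
    for alt in row["alternatives"]["segmentation"]:
        assert alt.replace(" ", "").lower() == row["hashtag"].lower()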

Additional Information

Citation Information

@misc{bansal2015deep,
      title={Towards Deep Semantic Analysis Of Hashtags}, 
      author={Piyush Bansal and Romil Bansal and Vasudeva Varma},
      year={2015},
      eprint={1501.03210},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}

Contributions

This dataset was added by @ruanchaves while developing the hashformers library.
