
# Dataset Card for ItaCoLA

## Dataset Summary
The Italian Corpus of Linguistic Acceptability (ItaCoLA) includes almost 10k sentences taken from the linguistic literature, with a binary acceptability annotation made by the original authors themselves. The work is inspired by the English Corpus of Linguistic Acceptability (CoLA).

Disclaimer: the ItaCoLA corpus is hosted on GitHub by the Digital Humanities group at FBK. It was introduced in the article *Monolingual and Cross-Lingual Acceptability Judgments with the Italian CoLA corpus* by Daniela Trotta, Raffaele Guarasci, Elisa Leonardelli and Sara Tonelli.
## Supported Tasks and Leaderboards

- Acceptability Classification

The following table is taken from Table 4 of the original paper, where an LSTM and a BERT model pretrained on the Italian language are fine-tuned on the `train` split of the corpus and evaluated respectively on the `test` split (in-domain, `in`) and on the acceptability portion of the [AcCompl-it] corpus (out-of-domain, `out`). Models are evaluated with accuracy (Acc.) and Matthews Correlation Coefficient (MCC) in both settings. Results are averaged over 10 runs with ± stdev. error bounds.
| Model | `in`, Acc. | `in`, MCC | `out`, Acc. | `out`, MCC |
|---|---|---|---|---|
| LSTM | 0.794 | 0.278 ± 0.029 | 0.605 | 0.147 ± 0.066 |
| ITA-BERT | 0.904 | 0.603 ± 0.022 | 0.683 | 0.198 ± 0.036 |
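For reference, the MCC metric reported above can be computed from the confusion-matrix counts of a binary classifier. A minimal pure-Python sketch is shown below; the function name is ours, and libraries such as scikit-learn provide an equivalent `matthews_corrcoef`:

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews Correlation Coefficient for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Conventionally 0 when any confusion-matrix margin is empty.
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(matthews_corrcoef([1, 1, 0, 0], [1, 0, 0, 1]))  # 0.0 (chance-level)
```

Unlike accuracy, MCC stays at 0 for chance-level predictors even on unbalanced label distributions, which is why acceptability benchmarks report it alongside accuracy.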
## Languages

The language data in ItaCoLA is in Italian (BCP-47 `it`).
## Dataset Structure

### Data Instances

#### Scores Configuration

The `scores` configuration contains sentences with acceptability judgments. An example from the `train` split of the `scores` config (default) is provided below.
```json
{
    "unique_id": 1,
    "source": "Graffi_1994",
    "acceptability": 1,
    "sentence": "Quest'uomo mi ha colpito."
}
```
The text is provided as-is, without further preprocessing or tokenization.
The fields are the following:

- `unique_id`: Unique identifier for the sentence across configurations.
- `source`: Original source for the sentence.
- `acceptability`: Binary score, 1 = acceptable, 0 = not acceptable.
- `sentence`: The evaluated sentence.
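As a sketch of how these fields might be consumed (the commented-out `load_dataset` call follows the standard 🤗 Datasets API but needs network access, so the snippet below works on a record shaped like the example above; the label mapping is our own illustration):

```python
# from datasets import load_dataset
# itacola = load_dataset("gsarti/itacola", "scores")
# example = itacola["train"][0]

example = {
    "unique_id": 1,
    "source": "Graffi_1994",
    "acceptability": 1,
    "sentence": "Quest'uomo mi ha colpito.",
}

# Map the binary score to a human-readable label.
label = "acceptable" if example["acceptability"] == 1 else "not acceptable"
print(f"[{example['source']}] {example['sentence']} -> {label}")
```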
#### Phenomena Configuration

The `phenomena` configuration contains a sample of sentences from `scores` that has been manually annotated to denote the presence of 9 linguistic phenomena. An example from the `train` split is provided below:
```json
{
    "unique_id": 1,
    "source": "Graffi_1994",
    "acceptability": 1,
    "sentence": "Quest'uomo mi ha colpito.",
    "cleft_construction": 0,
    "copular_construction": 0,
    "subject_verb_agreement": 1,
    "wh_islands_violations": 0,
    "simple": 0,
    "question": 0,
    "auxiliary": 1,
    "bind": 0,
    "indefinite_pronouns": 0
}
```
For each of the new fields, a binary score denotes the presence (1) or absence (0) of the respective phenomenon. Refer to the original paper for a detailed description of each phenomenon.
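The phenomenon fields can be scanned like any other binary features. A small sketch, using the field names from the example above on a record of the same shape:

```python
# The nine phenomenon field names, as they appear in the example record.
PHENOMENA = [
    "cleft_construction", "copular_construction", "subject_verb_agreement",
    "wh_islands_violations", "simple", "question", "auxiliary", "bind",
    "indefinite_pronouns",
]

record = {
    "unique_id": 1, "source": "Graffi_1994", "acceptability": 1,
    "sentence": "Quest'uomo mi ha colpito.",
    "cleft_construction": 0, "copular_construction": 0,
    "subject_verb_agreement": 1, "wh_islands_violations": 0,
    "simple": 0, "question": 0, "auxiliary": 1, "bind": 0,
    "indefinite_pronouns": 0,
}

# Collect the phenomena marked as present (value 1) in this sentence.
present = [name for name in PHENOMENA if record[name] == 1]
print(present)  # ['subject_verb_agreement', 'auxiliary']
```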
### Data Splits

| config | train | test |
|---|---|---|
| `scores` | 7801 | 975 |
| `phenomena` | 2088 | - |
## Dataset Creation

Please refer to the original article *Monolingual and Cross-Lingual Acceptability Judgments with the Italian CoLA corpus* for additional information on dataset creation.
## Additional Information

### Dataset Curators

The authors are the curators of the original dataset. For problems or updates on this 🤗 Datasets version, please contact gabriele.sarti996@gmail.com.

### Licensing Information

No licensing information is available.

### Citation Information

Please cite the authors if you use this corpus in your work:
```bibtex
@inproceedings{trotta-etal-2021-monolingual-cross,
    title = "Monolingual and Cross-Lingual Acceptability Judgments with the {I}talian {C}o{LA} corpus",
    author = "Trotta, Daniela and
      Guarasci, Raffaele and
      Leonardelli, Elisa and
      Tonelli, Sara",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
    month = nov,
    year = "2021",
    address = "Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-emnlp.250",
    doi = "10.18653/v1/2021.findings-emnlp.250",
    pages = "2929--2940"
}
```
