davanstrien (HF staff) committed on
Commit
4990903
1 Parent(s): cf036bf

Update README.md (#2)

- Update README.md (e502fe79c0ec32a5a218710d5072d131d5552c12)

Files changed (1):
  1. README.md +19 -10
README.md CHANGED
@@ -17,9 +17,11 @@ source_datasets:
  - original
  task_categories:
  - text-classification
+ - text-generation
  task_ids:
  - multi-class-classification
- - sentiment-classification
+ - language-modeling
+ - masked-language-modeling
  ---
  [Needs More Information]

@@ -51,7 +53,7 @@ task_ids:
  ## Dataset Description

  - **Homepage:** https://www.dhi.ac.uk/projects/old-bailey/
- - **Repository:** https://www.dhi.ac.uk/san/data/oldbailey/oldbailey.zip
+ - **Repository:** https://www.dhi.ac.uk/san/data/oldbailey/
  - **Paper:** [Needs More Information]
  - **Leaderboard:** [Needs More Information]
  - **Point of Contact:** The University of Sheffield

@@ -61,12 +63,16 @@ Sheffield S3 7QY

  ### Dataset Summary

+ **Note** We are making this dataset available via the HuggingFace hub to open it up to more users and use cases. We have focused primarily on making an initial version of this dataset available, focusing on some potential use cases. If you think there are other configurations this dataset should support, please use the community tab to open an issue.
+
  The dataset consists of 2,163 transcriptions of the Proceedings and 475 Ordinary's Accounts marked up in TEI-XML, and contains some documentation covering the data structure and variables. Each Proceedings file represents one session of the court (1674-1913), and each Ordinary's Account file represents a single pamphlet (1676-1772).

  ### Supported Tasks and Leaderboards

+ - `language-modeling`: This dataset can be used to contribute to the training or evaluation of language models for historical texts. Since it represents transcription from court proceedings, the language in this dataset may better represent the variety of language used at the time.
  - `text-classification`: This dataset can be used to classify what style of English some text is in
- - `named-entity-recognition`: Some of the text contains names of people and places which can be used for NER in older style texts
+ - `named-entity-recognition`: Some of the text contains names of people and places. We don't currently provide the token IDs for these entities but do provide the tokens themselves. This means this dataset has the potential to be used to evaluate the performance of other Named Entity Recognition models on this dataset.
+

  ### Languages

@@ -76,7 +82,9 @@ The dataset consists of 2,163 transcriptions of the Proceedings and 475 Ordinary

  ### Data Instances

- ```
+ An example of one instance from the dataset:
+
+ ```python
  {
   'id': 'OA16760517',
   'text': "THE CONFESSION AND EXECUTION Of the Prisoners at TYBURN On Wednesday the 17May1676. Viz. Henry Seabrook , Elizabeth Longman Robert Scot , Condemned the former Sessions. Edward Wall , and Edward Russell . Giving a full
@@ -93,32 +101,33 @@ sad Scene of their deplorable Tragedy. Being come to the Gallows, and the usual

  ### Data Fields

- - `id`: A unique identifier for the data point
- - `text`: The text of the data point
+ - `id`: A unique identifier for the data point (in this case, a trial)
+ - `text`: The text of the proceeding
  - `places`: The places mentioned in the text
  - `type`: This can be either 'OA' or 'OBP'. OA is "Ordinary's Accounts" and OBP is "Sessions Proceedings"
  - `persons`: The persons named in the text
  - `date`: The date of the text

  ### Data Splits
+ This dataset only contains a single split:

- Train: 2638
+ Train: `2638` examples

  ## Dataset Creation

  ### Curation Rationale

- Between 1674 and 1913 the Proceedings of the Central Criminal Court in London, the Old Bailey, were published eight times a year. These records detail some 197,000 individual trials and contain 127 million words recorded in 182,000 pages. They represent the largest single source of information about non-elite lives and behaviour ever published, and provide a wealth of detail about everyday life, as well as valuable systematic evidence of the circumstances surrounding the crimes and lives of victims and the accused, and their trial outcomes. This project created a fully digitised and structured version of all surviving published trial accounts between 1674 and 1913, and made them available as a searchable online resource.
+ Between 1674 and 1913 the Proceedings of the Central Criminal Court in London, the Old Bailey, were published eight times a year. These records detail 197,000 individual trials and contain 127 million words in 182,000 pages. They represent the largest single source of information about non-elite lives and behaviour ever published and provide a wealth of detail about everyday life, as well as valuable systematic evidence of the circumstances surrounding the crimes and lives of victims and the accused, and their trial outcomes. This project created a fully digitised and structured version of all surviving published trial accounts between 1674 and 1913, and made them available as a searchable online resource.

  ### Source Data

  #### Initial Data Collection and Normalization

- Starting with microfilms of the original Proceedings and Ordinary's Accounts, page images were scanned to create high definition, 400dpi TIFF files, from which GIF and JPEG files have been created for transmission over the internet. The uncompressed TIFF files will be preserved for archival purposes, and should eventually be accessible over the web once data transmission speeds improve. A GIF format has been used to transmit image files for the Proceedings published between 1674 and 1834.
+ Starting with microfilms of the original Proceedings and Ordinary's Accounts, page images were scanned to create high definition, 400dpi TIFF files, from which GIF and JPEG files have been created for transmission over the internet. The uncompressed TIFF files will be preserved for archival purposes and should eventually be accessible over the web once data transmission speeds improve. A GIF format has been used to transmit image files for the Proceedings published between 1674 and 1834.

  #### Who are the source language producers?

- The text of the 1674 to October 1834 Proceedings was manually typed by the process known as "double rekeying", whereby the text is typed in twice, by two different typists, and then the two transcriptions are compared by computer. Differences are identified and then resolved manually. This process was also used to create a transcription of the Ordinary's Accounts.
+ The text of the 1674 to October 1834 Proceedings was manually typed by the process known as "double rekeying", whereby the text is typed in twice, by two different typists. Then the two transcriptions are compared by computer. Differences are identified and then resolved manually. This process was also used to create a transcription of the Ordinary's Accounts. This process means this text data contains fewer errors than many historical text corpora produced using Optical Character Recognition.

  ### Annotations
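
The record schema described in the card's `Data Fields` section (`id`, `text`, `places`, `type`, `persons`, `date`, with `type` being either 'OA' or 'OBP') can be sketched as follows. Note this is a minimal illustration, not code from the dataset: the two records below are invented, and only the field names and the 'OA'/'OBP' codes come from the card.

```python
# Minimal sketch of the record schema from the card's "Data Fields" section.
# The records themselves are invented for illustration; only the field names
# ('id', 'text', 'places', 'type', 'persons', 'date') and the 'OA'/'OBP'
# type codes come from the dataset card.
records = [
    {"id": "OA16760517", "text": "...", "places": ["TYBURN"],
     "type": "OA", "persons": ["Henry Seabrook"], "date": "16760517"},
    {"id": "t16740429-1", "text": "...", "places": ["London"],
     "type": "OBP", "persons": [], "date": "16740429"},
]

# 'OA' marks Ordinary's Accounts, 'OBP' marks Sessions Proceedings, so
# filtering on 'type' separates the two document kinds.
ordinarys_accounts = [r for r in records if r["type"] == "OA"]
print(len(ordinarys_accounts))  # → 1
```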