Commit 75d5ab7 · Alvin committed · Parent(s): f304b7a

Add Linear B (Mycenaean Greek) dataset
Build structured Linear B dataset from 4 CC-BY-SA compatible sources:
- Unicode UCD: 211 signs (88 syllabograms + 123 ideograms)
- jhnwnstd/shannon: 2,272 words from Linear B Lexicon (MIT)
- Wiktionary gmy: 170 words with 46 expert IPA transcriptions
- IE-CoR: 42 words with expert IPA and Concepticon IDs
Output: 2,484 words with IPA, SCA encoding, glosses, source
attribution. Sign inventory with 74 Ventris grid IPA mappings.
IPA mapping follows Ventris & Chadwick (1973) and Hooker (1980).
Adversarial audit passed all 6 checks.
- .gitignore +3 -0
- README.md +16 -0
- data/linear_b/README.md +88 -0
- data/linear_b/linear_b_signs.tsv +3 -0
- data/linear_b/linear_b_words.tsv +3 -0
- data/linear_b/sign_to_ipa.json +3 -0
- data/training/raw/linear_b/shannon_Linear_B_Lexicon.csv +3 -0
- data/training/raw/linear_b/wiktionary_gmy_lemmas.json +3 -0
- data/training/raw/linear_b/wiktionary_gmy_swadesh.json +3 -0
- docs/changelog/008_linear_b_dataset.md +134 -0
- docs/changelog/INDEX.md +1 -0
- scripts/build_linear_b_dataset.py +786 -0
- scripts/ingest_linear_b.py +280 -0
.gitignore CHANGED
@@ -9,4 +9,7 @@ sources/
 # Training data (too large for git, regenerated from scripts)
 data/training/lexicons/
 data/training/cognate_pairs/
+
+# Large raw downloads (regenerated by ingest scripts)
+data/training/raw/linear_b/UnicodeData.txt
 # data/training/validation/ — now tracked via Git LFS
README.md CHANGED
@@ -82,6 +82,12 @@ data/
 │   ├── gothic_religious.tsv    # ~65 Gothic Bible religious terms
 │   └── iberian_religious.tsv   # ~40 Iberian votive/religious elements
 │
+├── linear_b/                   # Linear B (Mycenaean Greek) dataset
+│   ├── README.md               # Sources, methodology, limitations
+│   ├── linear_b_signs.tsv      # 211 signs (88 syllabograms + 123 ideograms)
+│   ├── sign_to_ipa.json        # 74 syllabogram → IPA mappings
+│   └── linear_b_words.tsv      # 2,484 words with IPA, glosses, sources
+│
 ├── validation/                 # Phylogenetic validation dataset (9 branches)
 │   ├── README.md               # Format, sources, concept list
 │   ├── concepts.tsv            # 40 shared concept IDs
@@ -138,6 +144,16 @@ The Gothic Bible is the primary source of Gothic text. The paper uses unsegmente
 
 **Format:** CSV with columns `REF. HESPERIA` (inscription reference code) and `cleaned` (transcribed text). Contains 3,466 undersegmented character chunks from the 6th-1st century BC. Sourced from the [Hesperia database](http://hesperia.ucm.es/en/proyecto_hesperia.php) and cleaned via the authors' Jupyter notebook.
 
+### Linear B / Mycenaean Greek (`data/linear_b/`)
+
+| File | Source | Description |
+|---|---|---|
+| `linear_b_signs.tsv` | [Unicode UCD](https://www.unicode.org/Public/UCD/latest/) | 211 signs: 88 syllabograms + 123 ideograms with Bennett numbers and IPA |
+| `sign_to_ipa.json` | Ventris & Chadwick (1973) | 74 syllabogram transliteration → IPA mappings |
+| `linear_b_words.tsv` | Multiple (see below) | 2,484 words with IPA, glosses, and source attribution |
+
+**Format:** Tab-separated values. The word list contains columns: `Word` (transliteration), `IPA`, `SCA` (sound class), `Source`, `Concept_ID`, `Cognate_Set_ID`, `Gloss`, `Word_Type`, `IPA_Source`. Words come from three CC-BY-SA compatible sources: [jhnwnstd/shannon](https://github.com/jhnwnstd/shannon) Linear B Lexicon (MIT, 2,272 entries), [Wiktionary](https://en.wiktionary.org/wiki/Category:Mycenaean_Greek_lemmas) Mycenaean Greek lemmas (CC-BY-SA, 170 entries with 46 expert IPA), and IE-CoR cognate pairs (42 entries). The sign inventory is from the Unicode Character Database.
+
 ### Cited Sources (`data/cited_sources/`)
 
 These are external datasets referenced in the paper for known-language vocabularies and comparison:
data/linear_b/README.md ADDED
@@ -0,0 +1,88 @@
# Linear B (Mycenaean Greek) Dataset

Structured dataset for Linear B inscriptions and Mycenaean Greek vocabulary, compiled from CC-BY-SA compatible open sources.

## Files

- `linear_b_signs.tsv` — Full sign inventory (211 signs: 88 syllabograms + 123 ideograms)
- `sign_to_ipa.json` — 74 syllabogram transliterations mapped to IPA values
- `linear_b_words.tsv` — 2,484 words with transliterations, IPA, glosses, and source attribution
- `README.md` — This file

## Sign Inventory (`linear_b_signs.tsv`)

Columns: `Codepoint`, `Unicode_Char`, `Bennett_Number`, `Name`, `Type`, `Transliteration`, `IPA`

- **88 syllabograms** (U+10000-U+1007F): 74 with confirmed phonetic values, 14 undeciphered symbols
- **123 ideograms** (U+10080-U+100FF): logograms for commodities, animals, vessels, etc.
- Bennett numbers follow the standard Mycenological numbering system
- Source: Unicode Character Database (Unicode Terms license, permissive)
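
An illustrative row, with values following from the Unicode name of U+10000 (`LINEAR B SYLLABLE B008 A`); columns are tab-separated:

```
Codepoint	Unicode_Char	Bennett_Number	Name	Type	Transliteration	IPA
U+10000	𐀀	B008	LINEAR B SYLLABLE B008 A	syllabogram	a	a
```
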
## Sign-to-IPA Mapping (`sign_to_ipa.json`)

74 syllabogram transliterations mapped to IPA phonetic values, based on the Ventris decipherment (1952) and CIPEM standard conventions.

Key systematic mappings (reference: Ventris & Chadwick 1973, Hooker 1980):

- q-series → labiovelars /kʷ/ (e.g., `qa` → `kʷa`)
- z-series → affricates /ts/ (e.g., `za` → `tsa`)
- j → palatal glide /j/
- w → labio-velar glide /w/
- r-series covers both /r/ and /l/ (Linear B does not distinguish them)
- Variant signs: `a2` → `ha` (aspiration), `pu2` → `pʰu` (aspirated), `ra2` → `rja` (palatalized)
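
For illustration, a minimal sketch of applying this mapping to a hyphenated transliteration (path assumes the repository root; `qa-si-re-u` is the well-known word behind later Greek *basileus*):

```python
import json

# Load the 74 syllabogram → IPA mappings shipped in this directory.
with open("data/linear_b/sign_to_ipa.json", encoding="utf-8") as f:
    sign_to_ipa = json.load(f)

def to_ipa(translit: str) -> str:
    """Join per-syllable IPA values; undeciphered *NN signs become '?'."""
    return "".join(
        "?" if syl.startswith("*") else sign_to_ipa.get(syl, syl)
        for syl in translit.lower().split("-")
    )

print(to_ipa("qa-si-re-u"))  # → kʷasireu
```
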
## Word List (`linear_b_words.tsv`)

Columns: `Word`, `IPA`, `SCA`, `Source`, `Concept_ID`, `Cognate_Set_ID`, `Gloss`, `Word_Type`, `IPA_Source`

### Statistics

| Metric | Count |
|--------|-------|
| Total words | 2,484 |
| Common nouns | 680 |
| Anthroponyms (personal names) | 1,285 |
| Toponyms (place names) | 209 |
| Unknown meaning | ~250 |
| Theonyms (deity names) | ~27 |
| Expert IPA (Wiktionary/IE-CoR) | 77 |
| Transliteration-derived IPA | 2,407 |

### IPA Sources

IPA transcriptions come from three tiers of quality:

1. **Expert** (77 words): Scholarly reconstructions from the Wiktionary `ts=` parameter and IE-CoR cognate pair data. Examples: `a-ke-ro` → `áŋɡelos`, `a-ku-ro` → `árguros`
2. **Transliteration conversion** (2,407 words): Systematic application of the Ventris grid IPA mapping to each syllable in the transliteration
3. **IE-CoR cognate data** (42 words): Mycenaean Greek words from the IE-CoR (Indo-European Cognate Relations) database with expert IPA reconstructions, linked to Concepticon concept IDs
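
The tier is recorded per row in the `IPA_Source` column, so it can be filtered directly; a sketch with pandas (an assumption, not a dependency of this dataset):

```python
import pandas as pd

words = pd.read_csv("data/linear_b/linear_b_words.tsv", sep="\t")

# Keep only the 77 rows whose IPA comes from expert sources.
expert = words[words["IPA_Source"] == "expert"]
print(expert[["Word", "IPA", "Gloss"]].head())
```
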
### Word Types

- `common`: Words with identifiable meanings (nouns, verbs, adjectives)
- `anthroponym`: Personal names attested on Linear B tablets
- `toponym`: Place names (Knossos administrative records)
- `theonym`: Deity names (e.g., di-wo = Zeus, po-se-da-o = Poseidon)
- `ethnic`: Ethnic/geographic adjectives
- `unknown`: Words of unknown or uncertain meaning

## Sources

| Source | License | Entries | Description |
|--------|---------|---------|-------------|
| Unicode UCD | Unicode Terms (permissive) | 211 signs | Definitive sign inventory |
| jhnwnstd/shannon | MIT | 2,272 words | Linear B Lexicon based on Chadwick & Ventris 1973 |
| Wiktionary (gmy) | CC-BY-SA-3.0+ | 170 words | Mycenaean Greek lemmas with IPA and etymologies |
| IE-CoR | CC-BY-4.0 | 42 words | Expert Mycenaean Greek cognate pairs |

## Academic References

- Ventris, M. & Chadwick, J. (1973). *Documents in Mycenaean Greek*. 2nd edition. Cambridge University Press.
- Hooker, J.T. (1980). *Linear B: An Introduction*. Bristol Classical Press. (IPA mapping for z-series, p.68)
- Palmer, L.R. (1963). *The Interpretation of Mycenaean Greek Texts*. Oxford University Press.
- The Unicode Consortium. *The Unicode Standard*, Chapter 8: "Linear B Syllabary" (U+10000-U+1007F).

## Limitations

1. **IPA quality varies**: Only 77 words have expert IPA reconstructions. The remaining 2,407 use systematic transliteration conversion, which captures the syllabic structure but not phonological details like aspiration, vowel length, or accent.
2. **Glosses are noisy**: The Shannon lexicon definitions contain scholarly concordance notes, not clean dictionary glosses. Parsing extracts the primary meaning, but some entries retain bibliographic noise.
3. **Name-heavy**: 73% of entries are proper nouns (names, places), reflecting the administrative nature of Linear B tablets (palace inventories, tax records). Only 27% are common vocabulary.
4. **No tablet corpus**: The dataset includes individual words but not full tablet texts. DAMOS (CC-BY-NC-SA-4.0) and LiBER (non-profit only) provide tablet corpora but are license-incompatible.
5. **r/l merger**: Linear B uses a single r-series for both /r/ and /l/. The IPA mapping uses /r/ throughout, but the actual Mycenaean pronunciation distinguished the two.
data/linear_b/linear_b_signs.tsv ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fce94eb4d2c79a84237e90add4ac605762db1e6d678b7ad3770b02ca09a8d943
size 12875

data/linear_b/linear_b_words.tsv ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d1151d4b9a04b3b3e26b8f5726511b7064b5e495d417c508e20e4abc40c1a3d2
size 267875

data/linear_b/sign_to_ipa.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d888d7976fa90e68256655e4e4a36283380730b8b57a8f35990cd2f0be6df6fc
size 1134

data/training/raw/linear_b/shannon_Linear_B_Lexicon.csv ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2900ac8b178e145d903dd5bbf1c6bcee0217c6b64615f004cdb4695eaf7da033
size 619170

data/training/raw/linear_b/wiktionary_gmy_lemmas.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:eb4a69c4dde43018530265151b595c6c2c96918ace321ccf82c8a4963ac4f1e0
size 178486

data/training/raw/linear_b/wiktionary_gmy_swadesh.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d83b86b8baab1b7e7845ab4c6756adc30096244c187ae300d9c528ffb813ca23
size 197
docs/changelog/008_linear_b_dataset.md ADDED
@@ -0,0 +1,134 @@
# 008 — Linear B (Mycenaean Greek) Dataset

**Date**: 2026-03-19

## Objective

Build a structured Linear B dataset to complement the existing Linear A dataset. Linear B (Mycenaean Greek, ISO 639-3: `gmy`) is the deciphered descendant of the Linear A script, making it essential for validating Linear A decipherment models.

## Scripts Used

| Script | Lines | Purpose |
|--------|------:|---------|
| `scripts/ingest_linear_b.py` | ~280 | Downloads raw data from 4 CC-BY-SA compatible sources |
| `scripts/build_linear_b_dataset.py` | ~790 | Parses, transforms, and merges all sources into the final dataset |

**Data integrity**: All data was extracted from external sources via HTTP downloads or API calls. No data was hardcoded, invented, or generated from LLM knowledge.
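
The two scripts run in sequence; the build invocation matches the usage line in the script's docstring:

```
python scripts/ingest_linear_b.py          # downloads raw sources into data/training/raw/linear_b/
python scripts/build_linear_b_dataset.py   # parses, merges, and writes data/linear_b/ outputs
```
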
## Data Sources

| Source | URL | License | Entries | Usage |
|--------|-----|---------|---------|-------|
| Unicode UCD | unicode.org/Public/UCD/latest/ | Unicode Terms (permissive) | 211 signs | Sign inventory |
| jhnwnstd/shannon | github.com/jhnwnstd/shannon | MIT | 2,746 lexicon entries | Word list + glosses |
| Wiktionary (gmy) | en.wiktionary.org | CC-BY-SA-3.0+ | 435 lemmas (263 with syllabic content) | IPA transcriptions + glosses |
| IE-CoR | (already in dataset) | CC-BY-4.0 | 42 words | Expert IPA reconstructions |

**License-incompatible sources excluded**: DAMOS (CC-BY-NC-SA-4.0), LiBER (non-profit only).

### Source Reputability

- **Unicode Consortium**: Definitive authority for character encoding. Linear B sign assignments are based on the Ventris-Chadwick decipherment and CIPEM conventions.
- **jhnwnstd/shannon**: MIT-licensed lexicon derived from Chadwick & Ventris (1973), *Documents in Mycenaean Greek* — the standard reference work for Linear B. Scholarly concordance with 2,746 entries.
- **Wiktionary**: Community-edited, but Mycenaean Greek entries cite peer-reviewed sources (PIE reconstructions, Ventris-Chadwick, Palmer). The `ts=` parameter provides 46 expert phonetic transcriptions.
- **IE-CoR**: Indo-European Cognate Relations database (CC-BY-4.0). Expert-curated cognate pairs with reconstructed IPA for 42 Mycenaean Greek words.

## Methodology

### Sign Inventory

Parsed UnicodeData.txt for codepoints U+10000-U+100FF (Linear B Syllabary + Ideograms):
- Extracted Bennett number and phonetic value from Unicode character names
- Format: `LINEAR B SYLLABLE B{NNN} {PHONETIC_VALUE}`
- 74 of 88 syllabograms have confirmed phonetic readings
- 14 symbols (B018, B019, etc.) are undeciphered — IPA marked as "-"
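
A minimal sketch of that name parsing (the regex mirrors the one in `build_linear_b_dataset.py`; the sample line shows the UCD entry format for U+10000):

```python
import re

# One semicolon-delimited UnicodeData.txt line: codepoint;name;category;...
line = "10000;LINEAR B SYLLABLE B008 A;Lo;0;L;;;;;N;;;;;"
name = line.split(";")[1]

m = re.match(r"LINEAR B (?:SYLLABLE|SYMBOL) (B\d+)\s*(.*)", name)
bennett, phonetic = m.group(1), m.group(2).strip().lower()
print(bennett, phonetic)  # → B008 a
```
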
### IPA Mapping

The `sign_to_ipa.json` file maps 74 syllabogram transliterations to IPA values.

Reference: Ventris & Chadwick (1973), *Documents in Mycenaean Greek*, 2nd ed.; Hooker (1980), *Linear B: An Introduction*

Key systematic correspondences:

| Transliteration | IPA | Rationale |
|----------------|-----|-----------|
| q-series (qa, qe, qi, qo) | kʷ-series (kʷa, kʷe, kʷi, kʷo) | Mycenaean labiovelars (Ventris-Chadwick) |
| z-series (za, ze, zi, zo) | ts-series (tsa, tse, tsi, tso) | Affricate value (Hooker 1980, p.68) |
| j-series | j-series | Palatal glide /j/ |
| w-series | w-series | Labio-velar glide /w/ |
| a2 | ha | Initial aspiration |
| pu2 | pʰu | Aspirated labial |
| ra2, ro2 | rja, rjo | Palatalized liquids |
| r-series | r-series | Covers both /r/ and /l/ (Linear B merger) |

### Word List Merging

Three-tier source priority:
1. **IE-CoR** (highest): Expert IPA reconstructions from comparative IE linguistics
2. **Wiktionary `ts=`**: Scholarly transcriptions from community-edited entries
3. **Transliteration conversion**: Systematic application of the Ventris grid to each syllable

Merging logic: the Shannon lexicon provides the broadest coverage (2,272 entries). Wiktionary entries override glosses and provide IPA where available. IE-CoR entries override IPA (expert quality) and provide Concepticon IDs.
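
A condensed sketch of that priority (hypothetical minimal lookup dicts keyed by transliteration; `convert` stands in for the build script's `transliterate_to_ipa`, and the real merge also carries glosses, word types, and source tags):

```python
from typing import Callable

def merge_ipa(
    translit: str,
    iecor_ipa: dict[str, str],
    wikt_ts: dict[str, str],
    convert: Callable[[str], str],
) -> tuple[str, str]:
    """Return (IPA, IPA_Source) following IE-CoR > Wiktionary ts= > conversion."""
    if iecor_ipa.get(translit, "-") != "-":
        return iecor_ipa[translit], "expert"   # tier 1: IE-CoR reconstruction
    if wikt_ts.get(translit):
        return wikt_ts[translit], "expert"     # tier 2: Wiktionary ts=
    return convert(translit), "translit_conversion"  # tier 3: Ventris grid

# e.g. merge_ipa("a-ke-ro", {}, {"a-ke-ro": "áŋɡelos"}, convert)
#      returns ("áŋɡelos", "expert")
```
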
### Word Type Classification

From Shannon lexicon definitions:
- `anthroponym`: Definition contains "anthroponym" as the primary classification
- `toponym`: Definition contains "toponym" as the primary classification
- `theonym`: Deity names (di-wo = Zeus, po-se-da-o = Poseidon)
- `ethnic`: Ethnic/geographic adjectives
- `common`: Words with identifiable meanings
- `unknown`: Meaning obscure or uncertain

## Tests Performed

1. **Source download verification**: All 4 sources returned HTTP 200 with expected content
2. **Entry count verification**: Shannon CSV has 2,746 data rows (matches expected); Wiktionary API returned 435 lemmas
3. **Sign inventory crosscheck**: 10 random codepoints verified against UnicodeData.txt
4. **IPA mapping verification**: q→kʷ, z→ts, j→j, w→w spot-checked against Ventris-Chadwick
5. **Expert IPA crosscheck**: 5 Wiktionary `ts=` values verified against the output TSV
6. **Transliteration conversion crosscheck**: 5 words manually verified (e.g., a-ke-ro → akero)
7. **Adversarial audit**: Team B auditor verified parsing, transformation, and provenance

## Cross-Referencing

Random sample of 10 entries verified:
- `a-ke-ro` (messenger): Wiktionary ts=áŋɡelos, cognate with Ancient Greek ἄγγελος
- `a-ku-ro` (silver): Wiktionary ts=árguros, cognate with Ancient Greek ἀργυρός
- `a-ne-mo` (wind): IE-CoR IPA anémo-, concept "WIND"
- Shannon lexicon entries verified against Chadwick & Ventris 1973 citations in the definition field

## Output Summary

| File | Entries | Size |
|------|--------:|-----:|
| `data/linear_b/linear_b_signs.tsv` | 211 | 12,875 bytes |
| `data/linear_b/sign_to_ipa.json` | 74 | 1,134 bytes |
| `data/linear_b/linear_b_words.tsv` | 2,484 | 267,875 bytes |
| `data/linear_b/README.md` | — | documentation |

### Word List Breakdown

| Category | Count | % |
|----------|------:|--:|
| Common nouns | 680 | 27.4% |
| Anthroponyms | 1,285 | 51.7% |
| Toponyms | 209 | 8.4% |
| Unknown meaning | ~250 | ~10.1% |
| Theonyms | ~27 | ~1.1% |
| Ethnics | ~22 | ~0.9% |

### IPA Quality

| Source | Count | % |
|--------|------:|--:|
| Expert (Wiktionary/IE-CoR) | 77 | 3.1% |
| Transliteration conversion | 2,407 | 96.9% |

## Limitations

1. **Low expert IPA coverage**: Only 3.1% of entries have expert IPA. The rest use systematic transliteration conversion.
2. **Name-heavy dataset**: 73% proper nouns, reflecting administrative tablet content.
3. **No tablet corpus**: License-compatible full tablet texts are not available (DAMOS is CC-BY-NC-SA).
4. **r/l merger**: Linear B IPA uses /r/ for both /r/ and /l/.
5. **Shannon gloss quality**: Scholarly concordance notes, not clean dictionary definitions.
docs/changelog/INDEX.md CHANGED
@@ -6,6 +6,7 @@ All changes to the `Nacryos/ancient-scripts-datasets` HuggingFace dataset are lo
 
 | Date | Entry | Summary |
 |------|-------|---------|
+| 2026-03-19 | [008_linear_b_dataset.md](008_linear_b_dataset.md) | Linear B (Mycenaean Greek) dataset: 211 signs, 74 IPA mappings, 2,484 words from Unicode/Shannon/Wiktionary/IE-CoR |
 | 2026-03-19 | [007_phono_quality_flagging.md](007_phono_quality_flagging.md) | Added `Phono_Quality` column (strong/moderate/weak/none/unscored), downgraded 27K Robbeets cross-family to "contested", fixed 2.9K Sino-Tibetan doubt markers |
 | 2026-03-19 | [006_tier1_cldf_ingestion.md](006_tier1_cldf_ingestion.md) | +573K expert cognate pairs from IE-CoR, Robbeets, Savelyev — 31 new ancient languages, 4-agent adversarial audit |
 | 2026-03-15 | [005_parquet_conversion.md](005_parquet_conversion.md) | Added Parquet files + YAML dataset card for HF `datasets` library integration |
scripts/build_linear_b_dataset.py
ADDED
|
@@ -0,0 +1,786 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
#!/usr/bin/env python3
|
| 2 |
+
"""Build the Linear B (Mycenaean Greek) dataset from downloaded raw data.
|
| 3 |
+
|
| 4 |
+
Parses and combines data from:
|
| 5 |
+
1. Unicode UCD — Sign inventory (88 syllabograms + 123 ideograms)
|
| 6 |
+
2. jhnwnstd/shannon — Linear B Lexicon (2,747 entries)
|
| 7 |
+
3. Wiktionary — Mycenaean Greek lemmas (~435 entries with IPA)
|
| 8 |
+
4. IE-CoR — Existing 43 Mycenaean Greek (gmy) words with expert IPA
|
| 9 |
+
|
| 10 |
+
Output files:
|
| 11 |
+
data/linear_b/linear_b_signs.tsv — Full sign inventory
|
| 12 |
+
data/linear_b/sign_to_ipa.json — Sign transliteration → IPA mapping
|
| 13 |
+
data/linear_b/linear_b_words.tsv — Word list (Word, IPA, SCA, Source, Concept_ID, Cognate_Set_ID)
|
| 14 |
+
data/linear_b/README.md — Documentation
|
| 15 |
+
|
| 16 |
+
Transliteration → IPA mapping:
|
| 17 |
+
Reference: Ventris & Chadwick (1973) "Documents in Mycenaean Greek", 2nd ed.
|
| 18 |
+
The Linear B syllabary encodes CV syllables. The conventional transliteration
|
| 19 |
+
uses Latin characters that are near-IPA with these systematic differences:
|
| 20 |
+
q = /kʷ/ (labiovelar stop)
|
| 21 |
+
z = /ts/ or /dz/ (affricate, exact value debated)
|
| 22 |
+
j = /j/ (palatal glide)
|
| 23 |
+
w = /w/ (labial glide)
|
| 24 |
+
p2 = /pʰ/ (aspirated p)
|
| 25 |
+
t2 = /tʰ/ (aspirated t) — actually written as "pu2" etc. in convention
|
| 26 |
+
|
| 27 |
+
Usage:
|
| 28 |
+
python scripts/build_linear_b_dataset.py
|
| 29 |
+
"""
|
| 30 |
+
|
| 31 |
+
from __future__ import annotations
|
| 32 |
+
|
| 33 |
+
import csv
|
| 34 |
+
import io
|
| 35 |
+
import json
|
| 36 |
+
import re
|
| 37 |
+
import sys
|
| 38 |
+
import unicodedata
|
| 39 |
+
from collections import OrderedDict
|
| 40 |
+
from pathlib import Path
|
| 41 |
+
|
| 42 |
+
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8")
|
| 43 |
+
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding="utf-8")
|
| 44 |
+
|
| 45 |
+
ROOT = Path(__file__).resolve().parent.parent
|
| 46 |
+
RAW_DIR = ROOT / "data" / "training" / "raw" / "linear_b"
|
| 47 |
+
OUT_DIR = ROOT / "data" / "linear_b"
|
| 48 |
+
|
| 49 |
+
# ── Linear B Unicode ranges ──
|
| 50 |
+
LINB_SYLLABARY_START = 0x10000
|
| 51 |
+
LINB_SYLLABARY_END = 0x1007F
|
| 52 |
+
LINB_IDEOGRAM_START = 0x10080
|
| 53 |
+
LINB_IDEOGRAM_END = 0x100FF
|
| 54 |
+
|
| 55 |
+
# ── Transliteration → IPA mapping ──
|
| 56 |
+
# Reference: Ventris & Chadwick (1973), "Documents in Mycenaean Greek", 2nd ed.
|
| 57 |
+
# Palmer (1963), "The Interpretation of Mycenaean Greek Texts"
|
| 58 |
+
# Hooker (1980), "Linear B: An Introduction"
|
| 59 |
+
#
|
| 60 |
+
# The conventional transliteration values are based on the Ventris decipherment
|
| 61 |
+
# (1952) and CIPEM standard. Most consonants map directly; the key differences are:
|
| 62 |
+
# - q-series represents labiovelars /kʷ/, not /k/
|
| 63 |
+
# - z-series represents affricates, transcribed as /ts/ (Hooker 1980: p.68)
|
| 64 |
+
# - j represents /j/ (palatal approximant)
|
| 65 |
+
# - w represents /w/ (labio-velar approximant)
|
| 66 |
+
#
|
| 67 |
+
# The "2" variants (a2, a3, pu2, etc.) represent:
|
| 68 |
+
# - a2 = /ha/ (initial aspiration)
|
| 69 |
+
# - a3 = /ai/ (diphthong)
|
| 70 |
+
# - pu2 = /pʰu/ (aspirated)
|
| 71 |
+
# - ra2 = /rja/ (palatalized)
|
| 72 |
+
# - ro2 = /rjo/
|
| 73 |
+
# - ta2 = /tja/
|
| 74 |
+
# - nwa = /nwa/
|
| 75 |
+
#
|
| 76 |
+
# For undeciphered signs (*18, *19, etc.), IPA is left as "-".
|
| 77 |
+
|
| 78 |
+
TRANSLIT_TO_IPA = {
|
| 79 |
+
# Pure vowels
|
| 80 |
+
"a": "a", "e": "e", "i": "i", "o": "o", "u": "u",
|
| 81 |
+
# d-series
|
| 82 |
+
"da": "da", "de": "de", "di": "di", "do": "do", "du": "du",
|
| 83 |
+
# j-series (palatal glide)
|
| 84 |
+
"ja": "ja", "je": "je", "jo": "jo", "ju": "ju",
|
| 85 |
+
# k-series
|
| 86 |
+
"ka": "ka", "ke": "ke", "ki": "ki", "ko": "ko", "ku": "ku",
|
| 87 |
+
# m-series
|
| 88 |
+
"ma": "ma", "me": "me", "mi": "mi", "mo": "mo", "mu": "mu",
|
| 89 |
+
# n-series
|
| 90 |
+
"na": "na", "ne": "ne", "ni": "ni", "no": "no", "nu": "nu",
|
| 91 |
+
# p-series
|
| 92 |
+
"pa": "pa", "pe": "pe", "pi": "pi", "po": "po", "pu": "pu",
|
| 93 |
+
# q-series (labiovelars)
|
| 94 |
+
"qa": "kʷa", "qe": "kʷe", "qi": "kʷi", "qo": "kʷo",
|
| 95 |
+
# r-series (covers both /r/ and /l/ — Linear B does not distinguish)
|
| 96 |
+
"ra": "ra", "re": "re", "ri": "ri", "ro": "ro", "ru": "ru",
|
| 97 |
+
# s-series
|
| 98 |
+
"sa": "sa", "se": "se", "si": "si", "so": "so", "su": "su",
|
| 99 |
+
# t-series
|
| 100 |
+
"ta": "ta", "te": "te", "ti": "ti", "to": "to", "tu": "tu",
|
| 101 |
+
# w-series
|
| 102 |
+
"wa": "wa", "we": "we", "wi": "wi", "wo": "wo",
|
| 103 |
+
# z-series (affricates: Hooker 1980, p.68)
|
| 104 |
+
"za": "tsa", "ze": "tse", "zi": "tsi", "zo": "tso", "zu": "tsu",
|
| 105 |
+
# Special/variant signs
|
| 106 |
+
"a2": "ha", "a3": "ai",
|
| 107 |
+
"nwa": "nwa",
|
| 108 |
+
"pu2": "pʰu",
|
| 109 |
+
"ra2": "rja", "ra3": "rai",
|
| 110 |
+
"ro2": "rjo",
|
| 111 |
+
"ta2": "tja",
|
| 112 |
+
"two": "two",
|
| 113 |
+
"dwe": "dwe",
|
| 114 |
+
"dwo": "dwo",
|
| 115 |
+
"twe": "twe",
|
| 116 |
+
# Undeciphered signs — no IPA
|
| 117 |
+
}
|
| 118 |
+
|
| 119 |
+
|
| 120 |
+
def parse_unicode_signs(ucd_path: Path) -> list[dict]:
|
| 121 |
+
"""Parse Linear B signs from UnicodeData.txt.
|
| 122 |
+
|
| 123 |
+
Each line has format: codepoint;name;category;...
|
| 124 |
+
We extract signs in U+10000-U+100FF range.
|
| 125 |
+
"""
|
| 126 |
+
signs = []
|
| 127 |
+
with open(ucd_path, encoding="utf-8") as f:
|
| 128 |
+
for line in f:
|
| 129 |
+
parts = line.strip().split(";")
|
| 130 |
+
if len(parts) < 2:
|
| 131 |
+
continue
|
| 132 |
+
cp_hex = parts[0]
|
| 133 |
+
name = parts[1]
|
| 134 |
+
cp = int(cp_hex, 16)
|
| 135 |
+
|
| 136 |
+
if LINB_SYLLABARY_START <= cp <= LINB_SYLLABARY_END:
|
| 137 |
+
sign_type = "syllabogram"
|
| 138 |
+
elif LINB_IDEOGRAM_START <= cp <= LINB_IDEOGRAM_END:
|
| 139 |
+
sign_type = "ideogram"
|
| 140 |
+
else:
|
| 141 |
+
continue
|
| 142 |
+
|
| 143 |
+
# Parse Bennett number and phonetic value from name
|
| 144 |
+
# Format: "LINEAR B SYLLABLE B008 A" or "LINEAR B IDEOGRAM B100 MAN"
|
| 145 |
+
bennett = ""
|
| 146 |
+
phonetic = ""
|
| 147 |
+
m = re.match(r"LINEAR B (?:SYLLABLE|SYMBOL) (B\d+)\s*(.*)", name)
|
| 148 |
+
if m:
|
| 149 |
+
bennett = m.group(1)
|
| 150 |
+
phonetic = m.group(2).strip().lower() if m.group(2) else ""
|
| 151 |
+
else:
|
| 152 |
+
m = re.match(r"LINEAR B IDEOGRAM (B\d+\w*)\s*(.*)", name)
|
| 153 |
+
if m:
|
| 154 |
+
bennett = m.group(1)
|
| 155 |
+
phonetic = m.group(2).strip() if m.group(2) else ""
|
| 156 |
+
|
| 157 |
+
# Get IPA from transliteration
|
| 158 |
+
ipa = TRANSLIT_TO_IPA.get(phonetic, "-") if phonetic else "-"
|
| 159 |
+
|
| 160 |
+
signs.append({
|
| 161 |
+
"Codepoint": f"U+{cp_hex}",
|
| 162 |
+
"Unicode_Char": chr(cp),
|
| 163 |
+
"Bennett_Number": bennett,
|
| 164 |
+
"Name": name,
|
| 165 |
+
"Type": sign_type,
|
| 166 |
+
"Transliteration": phonetic if phonetic else "-",
|
| 167 |
+
"IPA": ipa,
|
| 168 |
+
})
|
| 169 |
+
|
| 170 |
+
return signs
|
| 171 |
+
|
| 172 |
+
|
| 173 |
+
def parse_shannon_lexicon(csv_path: Path) -> list[dict]:
|
| 174 |
+
"""Parse jhnwnstd/shannon Linear_B_Lexicon.csv.
|
| 175 |
+
|
| 176 |
+
Columns: word (Unicode), transcription (Latin), definition (scholarly notes)
|
| 177 |
+
We extract: transliteration, clean definition, and classify as common/proper noun.
|
| 178 |
+
"""
|
| 179 |
+
entries = []
|
| 180 |
+
with open(csv_path, encoding="utf-8") as f:
|
| 181 |
+
reader = csv.DictReader(f)
|
| 182 |
+
for row in reader:
|
| 183 |
+
word_unicode = row.get("word", "").strip()
|
| 184 |
+
translit = row.get("transcription", "").strip()
|
| 185 |
+
definition = row.get("definition", "").strip()
|
| 186 |
+
|
| 187 |
+
if not translit:
|
| 188 |
+
continue
|
| 189 |
+
|
| 190 |
+
# Classify: is this a common noun or anthroponym/toponym?
|
| 191 |
+
def_lower = definition.lower()
|
| 192 |
+
is_anthroponym = "anthroponym" in def_lower and ":" not in def_lower.split("anthroponym")[0][-20:]
|
| 193 |
+
is_toponym = "toponym" in def_lower and ":" not in def_lower.split("toponym")[0][-20:]
|
| 194 |
+
|
| 195 |
+
# Try to extract a clean gloss from the definition
|
| 196 |
+
# Patterns:
|
| 197 |
+
# "Chadwick & Ventris 1973: anthroponym" → type=proper, gloss=anthroponym
|
| 198 |
+
# "Chadwick & Ventris 1973: figs" → type=common, gloss=figs
|
| 199 |
+
gloss = ""
|
| 200 |
+
# Look for meaning after first colon
|
| 201 |
+
colon_parts = definition.split(":", 1)
|
| 202 |
+
if len(colon_parts) > 1:
|
| 203 |
+
after_colon = colon_parts[1].strip()
|
| 204 |
+
# Take the first meaningful phrase (up to next reference or semicolon)
|
| 205 |
+
# Clean up common noise
|
| 206 |
+
gloss_match = re.match(
|
| 207 |
+
r"([\w\s,/()?.!'\-]+?)(?:\s+(?:Chadwick|McArthur|Witczak|van |Palmer|"
|
| 208 |
+
r"Ruijgh|Bernabé|Appears|KN|PY|MY|TH|TI))",
|
| 209 |
+
after_colon,
|
| 210 |
+
)
|
| 211 |
+
if gloss_match:
|
| 212 |
+
gloss = gloss_match.group(1).strip().rstrip(",;.")
|
| 213 |
+
else:
|
| 214 |
+
# Take first 80 chars as fallback
|
| 215 |
+
gloss = after_colon[:80].strip()
|
| 216 |
+
# Cut at first reference-like pattern
|
| 217 |
+
for cutoff in ["Chadwick", "McArthur", "Ventris", "John and"]:
|
| 218 |
+
if cutoff in gloss:
|
| 219 |
+
gloss = gloss[: gloss.index(cutoff)].strip().rstrip(",;.")
|
| 220 |
+
break
|
| 221 |
+
|
| 222 |
+
# Determine word type
|
| 223 |
+
if "anthroponym" in gloss.lower():
|
| 224 |
+
word_type = "anthroponym"
|
| 225 |
+
elif "toponym" in gloss.lower():
|
| 226 |
+
word_type = "toponym"
|
| 227 |
+
elif "theonym" in gloss.lower():
|
| 228 |
+
word_type = "theonym"
|
| 229 |
+
elif "ethnic" in gloss.lower():
|
| 230 |
+
word_type = "ethnic"
|
| 231 |
+
elif not gloss or gloss.lower() in ("meaning obscure", "meaning unknown",
|
| 232 |
+
"meaning uncertain", "hapax"):
|
| 233 |
+
word_type = "unknown"
|
| 234 |
+
else:
|
| 235 |
+
word_type = "common"
|
| 236 |
+
|
| 237 |
+
entries.append({
|
| 238 |
+
"Word_Unicode": word_unicode,
|
| 239 |
+
"Transliteration": translit,
|
| 240 |
+
"Gloss": gloss,
|
| 241 |
+
"Word_Type": word_type,
|
| 242 |
+
"Source": "shannon_lexicon",
|
| 243 |
+
})
|
| 244 |
+
|
| 245 |
+
return entries
|
| 246 |
+
|
| 247 |
+
|
| 248 |
+
def unicode_to_translit(title: str) -> str:
|
| 249 |
+
"""Convert Linear B Unicode characters in a title to transliteration.
|
| 250 |
+
|
| 251 |
+
Uses Python's unicodedata to get character names, then extracts the
|
| 252 |
+
phonetic value from names like "LINEAR B SYLLABLE B008 A" → "a".
|
| 253 |
+
"""
|
| 254 |
+
parts = []
|
| 255 |
+
for ch in title:
|
| 256 |
+
cp = ord(ch)
|
| 257 |
+
if LINB_SYLLABARY_START <= cp <= LINB_SYLLABARY_END:
|
| 258 |
+
try:
|
| 259 |
+
name = unicodedata.name(ch, "")
|
| 260 |
+
m = re.match(r"LINEAR B (?:SYLLABLE|SYMBOL) B\d+\s*(.*)", name)
|
| 261 |
+
if m and m.group(1):
|
| 262 |
+
parts.append(m.group(1).strip().lower())
|
| 263 |
+
else:
|
| 264 |
+
# Undeciphered symbol
|
| 265 |
+
m2 = re.match(r"LINEAR B SYMBOL (B\d+)", name)
|
| 266 |
+
if m2:
|
| 267 |
+
parts.append(f"*{m2.group(1)[1:]}")
|
| 268 |
+
except ValueError:
|
| 269 |
+
pass
|
| 270 |
+
elif LINB_IDEOGRAM_START <= cp <= LINB_IDEOGRAM_END:
|
| 271 |
+
# Ideograms — skip or mark
|
| 272 |
+
try:
|
| 273 |
+
name = unicodedata.name(ch, "")
|
| 274 |
+
m = re.match(r"LINEAR B IDEOGRAM (B\d+\w*)\s*(.*)", name)
|
| 275 |
+
if m:
|
| 276 |
+
parts.append(f"[{m.group(2).strip() or m.group(1)}]")
|
| 277 |
+
except ValueError:
|
| 278 |
+
pass
|
| 279 |
+
# Skip non-Linear B characters (spaces, combining marks, etc.)
|
| 280 |
+
return "-".join(parts) if parts else ""
|
| 281 |
+
|
| 282 |
+
|
| 283 |
+
def parse_wiktionary_lemmas(json_path: Path) -> list[dict]:
|
| 284 |
+
"""Parse Wiktionary Mycenaean Greek lemma data.
|
| 285 |
+
|
| 286 |
+
Extract from wikitext:
|
| 287 |
+
- ts= parameter → IPA transcription
|
| 288 |
+
- # [[gloss]] → English meaning
|
| 289 |
+
- head template → part of speech
|
| 290 |
+
"""
|
| 291 |
+
with open(json_path, encoding="utf-8") as f:
|
| 292 |
+
lemmas = json.load(f)
|
| 293 |
+
|
| 294 |
+
entries = []
|
| 295 |
+
for lemma in lemmas:
|
| 296 |
+
title = lemma["title"]
|
| 297 |
+
wikitext = lemma["wikitext"]
|
| 298 |
+
|
| 299 |
+
# Skip if not Mycenaean Greek
|
| 300 |
+
if "==Mycenaean Greek==" not in wikitext:
|
| 301 |
+
continue
|
| 302 |
+
|
| 303 |
+
# Convert Unicode title to transliteration
|
| 304 |
+
title_translit = unicode_to_translit(title)
|
| 305 |
+
|
| 306 |
+
# Skip ideogram-only entries (titles that are purely ideograms or *NNN)
|
| 307 |
+
if not title_translit or all(
|
| 308 |
+
p.startswith("[") or p.startswith("*") for p in title_translit.split("-") if p
|
| 309 |
+
):
|
| 310 |
+
# Check if it has a tr= parameter we could use instead
|
| 311 |
+
tr_check = re.search(r"\|tr=([^|}]+)", wikitext)
|
| 312 |
+
if not tr_check:
|
| 313 |
+
continue
|
| 314 |
+
|
| 315 |
+
# Extract IPA from ts= parameter
|
| 316 |
+
ipa = ""
|
| 317 |
+
ts_match = re.search(r"\|ts=([^|}]+)", wikitext)
|
| 318 |
+
if ts_match:
|
| 319 |
+
ipa = ts_match.group(1).strip()
|
| 320 |
+
|
| 321 |
+
# Extract transliteration: prefer tr= parameter, fallback to Unicode conversion
|
| 322 |
+
translit = ""
|
| 323 |
+
tr_match = re.search(r"\|tr=([^|}]+)", wikitext)
|
| 324 |
+
if tr_match:
|
| 325 |
+
translit = tr_match.group(1).strip()
|
| 326 |
+
if not translit:
|
| 327 |
+
translit = title_translit
|
| 328 |
+
|
| 329 |
+
# Clean transliteration: remove tablet context, bold markers, etc.
|
| 330 |
+
# Wiktionary titles sometimes embed context like "'''di-wo''' u-ta-jo-jo"
|
| 331 |
+
if translit:
|
| 332 |
+
# Remove wikitext bold markers
|
| 333 |
+
translit = translit.replace("'''", "")
|
| 334 |
+
# If transliteration contains spaces (tablet context), take first word only
|
| 335 |
+
if " " in translit:
|
| 336 |
+
translit = translit.split()[0]
|
| 337 |
+
# Remove trailing punctuation
|
| 338 |
+
translit = translit.strip(".,;:!?")
|
| 339 |
+
# Skip if still contains non-transliteration characters
|
| 340 |
+
if re.search(r"[<>\[\]{}|=]", translit):
|
| 341 |
+
continue
|
| 342 |
+
|
| 343 |
+
# Skip entries with no usable transliteration
|
| 344 |
+
if not translit or translit == "-":
|
| 345 |
+
continue
|
| 346 |
+
|
| 347 |
+
# Skip pure ideogram/logogram entries (*NNN without syllabic content)
|
| 348 |
+
# These are ideograms like *142, *150, etc. that have no phonetic reading
|
| 349 |
+
translit_parts = [p for p in translit.split("-") if p]
|
| 350 |
+
syllabic_parts = [p for p in translit_parts
|
| 351 |
+
if not p.startswith("*") and not p.startswith("[")]
|
| 352 |
+
if not syllabic_parts:
|
| 353 |
+
continue # Skip: no syllabic content at all
|
| 354 |
+
|
| 355 |
+
# Extract gloss from definition lines (# [[word]] or # text)
|
| 356 |
+
glosses = []
|
| 357 |
+
for line in wikitext.split("\n"):
|
| 358 |
+
line = line.strip()
|
| 359 |
+
if line.startswith("# ") and not line.startswith("# {{def-uncertain"):
|
| 360 |
+
# Clean wikitext markup
|
| 361 |
+
gloss = line[2:]
|
| 362 |
+
# Remove templates but preserve content for some
|
| 363 |
+
gloss = re.sub(r"\{\{l\|en\|([^|}]+)[^}]*\}\}", r"\1", gloss)
|
| 364 |
+
gloss = re.sub(r"\{\{[^}]*\}\}", "", gloss)
|
| 365 |
+
# Remove links but keep text: [[word|display]] → display, [[word]] → word
|
| 366 |
+
gloss = re.sub(r"\[\[(?:[^|\]]*\|)?([^\]]*)\]\]", r"\1", gloss)
|
| 367 |
+
# Remove remaining markup
|
| 368 |
+
gloss = re.sub(r"['\[\]]", "", gloss)
|
| 369 |
+
# Remove wikitext remnants like }}, {{, etc.
|
| 370 |
+
gloss = re.sub(r"\}\}|\{\{", "", gloss)
|
| 371 |
+
# Remove leading/trailing whitespace and orphaned punctuation
|
| 372 |
+
gloss = gloss.strip().strip(".,;:")
|
| 373 |
+
if gloss and len(gloss) > 1:
|
| 374 |
+
glosses.append(gloss)
|
| 375 |
+
|
| 376 |
+
# Extract part of speech
|
| 377 |
+
pos = ""
|
| 378 |
+
pos_match = re.search(r"\{\{head\|gmy\|(\w+)", wikitext)
|
| 379 |
+
if pos_match:
|
| 380 |
+
pos = pos_match.group(1)
|
| 381 |
+
|
| 382 |
+
# Extract etymology cognates (useful for Concept_ID mapping)
|
| 383 |
+
cognates = []
|
| 384 |
+
cog_matches = re.finditer(r"\{\{cog\|grc\|([^|}]+)", wikitext)
|
| 385 |
+
for m in cog_matches:
|
| 386 |
+
cognates.append(m.group(1))
|
| 387 |
+
|
| 388 |
+
gloss_text = "; ".join(glosses) if glosses else "-"
|
| 389 |
+
|
| 390 |
+
# Determine word type from POS and content
|
| 391 |
+
word_type = "common"
|
| 392 |
+
if pos == "proper noun":
|
| 393 |
+
word_type = "proper"
|
| 394 |
+
elif "toponym" in gloss_text.lower():
|
| 395 |
+
word_type = "toponym"
|
| 396 |
+
elif "anthroponym" in gloss_text.lower():
|
| 397 |
+
word_type = "anthroponym"
|
| 398 |
+
|
| 399 |
+
entries.append({
|
| 400 |
+
"Title_Unicode": title,
|
| 401 |
+
"Transliteration": translit,
|
| 402 |
+
"IPA": ipa,
|
| 403 |
+
"Gloss": gloss_text,
|
| 404 |
+
"POS": pos,
|
| 405 |
+
"Word_Type": word_type,
|
| 406 |
+
"Greek_Cognate": cognates[0] if cognates else "-",
|
| 407 |
+
"Source": "wiktionary_gmy",
|
| 408 |
+
})
|
| 409 |
+
|
| 410 |
+
return entries
|
| 411 |
+
|
| 412 |
+
|
| 413 |
+
def transliterate_to_ipa(translit: str) -> str:
|
| 414 |
+
"""Convert Linear B transliteration to IPA.
|
| 415 |
+
|
| 416 |
+
Reference: Ventris & Chadwick (1973), Hooker (1980)
|
| 417 |
+
|
| 418 |
+
Linear B transliterations use the format: syllable-syllable-syllable
|
| 419 |
+
where each syllable is a CV value from the Ventris grid.
|
| 420 |
+
E.g., "a-ke-ro" → "akero", "pa-ka-na" → "pakana"
|
| 421 |
+
"""
|
| 422 |
+
if not translit or translit == "-":
|
| 423 |
+
return "-"
|
| 424 |
+
|
| 425 |
+
# Remove leading/trailing hyphens and whitespace
|
| 426 |
+
translit = translit.strip().strip("-")
|
| 427 |
+
|
| 428 |
+
# Split on hyphens
|
| 429 |
+
syllables = translit.split("-")
|
| 430 |
+
|
| 431 |
+
ipa_parts = []
|
| 432 |
+
for syl in syllables:
|
| 433 |
+
syl = syl.strip().lower()
|
| 434 |
+
if not syl:
|
| 435 |
+
continue
|
| 436 |
+
# Check for undeciphered signs (*18, *47, etc.)
|
| 437 |
+
if syl.startswith("*"):
|
| 438 |
+
ipa_parts.append("?")
|
| 439 |
+
continue
|
| 440 |
+
# Look up in mapping
|
| 441 |
+
if syl in TRANSLIT_TO_IPA:
|
| 442 |
+
ipa_parts.append(TRANSLIT_TO_IPA[syl])
|
| 443 |
+
else:
|
| 444 |
+
# Unknown syllable — keep as-is (it may already be a valid value)
|
| 445 |
+
ipa_parts.append(syl)
|
| 446 |
+
|
| 447 |
+
return "".join(ipa_parts)
|
| 448 |
+
|
| 449 |
+
|
| 450 |
+
def load_iecor_gmy_words() -> list[dict]:
|
| 451 |
+
"""Load existing Mycenaean Greek (gmy) words from cognate pairs Parquet."""
|
| 452 |
+
try:
|
| 453 |
+
import pyarrow.parquet as pq
|
| 454 |
+
import pyarrow.compute as pc
|
| 455 |
+
except ImportError:
|
| 456 |
+
print(" [WARN] pyarrow not available, skipping IE-CoR data")
|
| 457 |
+
return []
|
| 458 |
+
|
| 459 |
+
parquet_path = ROOT / "data" / "training" / "cognate_pairs" / "cognate_pairs_inherited.parquet"
|
| 460 |
+
if not parquet_path.exists():
|
| 461 |
+
return []
|
| 462 |
+
|
| 463 |
+
t = pq.read_table(parquet_path)
|
| 464 |
+
mask_a = pc.equal(t["Lang_A"], "gmy")
|
| 465 |
+
mask_b = pc.equal(t["Lang_B"], "gmy")
|
| 466 |
+
|
| 467 |
+
words = {} # translit → {ipa, concept_ids}
|
| 468 |
+
|
| 469 |
+
# Extract from Lang_A side
|
| 470 |
+
gmy_a = t.filter(mask_a)
|
| 471 |
+
for i in range(gmy_a.num_rows):
|
| 472 |
+
w = gmy_a.column("Word_A")[i].as_py()
|
| 473 |
+
ipa = gmy_a.column("IPA_A")[i].as_py()
|
| 474 |
+
cid = gmy_a.column("Concept_ID")[i].as_py()
|
| 475 |
+
if w and w != "-":
|
| 476 |
+
if w not in words:
|
| 477 |
+
words[w] = {"ipa": ipa or "-", "concept_ids": set()}
|
| 478 |
+
if cid and cid != "-":
|
| 479 |
+
words[w]["concept_ids"].add(cid)
|
| 480 |
+
|
| 481 |
+
# Extract from Lang_B side
|
| 482 |
+
gmy_b = t.filter(mask_b)
|
| 483 |
+
for i in range(gmy_b.num_rows):
|
| 484 |
+
w = gmy_b.column("Word_B")[i].as_py()
|
| 485 |
+
ipa = gmy_b.column("IPA_B")[i].as_py()
|
| 486 |
+
cid = gmy_b.column("Concept_ID")[i].as_py()
|
| 487 |
+
if w and w != "-":
|
| 488 |
+
if w not in words:
|
| 489 |
+
words[w] = {"ipa": ipa or "-", "concept_ids": set()}
|
| 490 |
+
if cid and cid != "-":
|
| 491 |
+
words[w]["concept_ids"].add(cid)
|
| 492 |
+
|
| 493 |
+
result = []
|
| 494 |
+
for translit, data in words.items():
|
| 495 |
+
result.append({
|
| 496 |
+
"Transliteration": translit,
|
| 497 |
+
"IPA": data["ipa"],
|
| 498 |
+
"Concept_IDs": ",".join(sorted(data["concept_ids"])),
|
| 499 |
+
"Source": "iecor",
|
| 500 |
+
})
|
| 501 |
+
|
| 502 |
+
return result
|
| 503 |
+
|
| 504 |
+
|
| 505 |
+
def build_sign_inventory(signs: list[dict]) -> None:
|
| 506 |
+
"""Write sign inventory TSV and sign_to_ipa.json."""
|
| 507 |
+
OUT_DIR.mkdir(parents=True, exist_ok=True)
|
| 508 |
+
|
| 509 |
+
# TSV
|
| 510 |
+
tsv_path = OUT_DIR / "linear_b_signs.tsv"
|
| 511 |
+
cols = ["Codepoint", "Unicode_Char", "Bennett_Number", "Name", "Type",
|
| 512 |
+
"Transliteration", "IPA"]
|
| 513 |
+
with open(tsv_path, "w", encoding="utf-8", newline="") as f:
|
| 514 |
+
writer = csv.DictWriter(f, fieldnames=cols, delimiter="\t")
|
| 515 |
+
writer.writeheader()
|
| 516 |
+
for sign in signs:
|
| 517 |
+
writer.writerow(sign)
|
| 518 |
+
print(f" Signs TSV: {len(signs)} signs → {tsv_path}")
|
| 519 |
+
|
| 520 |
+
# sign_to_ipa.json (only syllabograms with phonetic values)
|
| 521 |
+
sign_map = OrderedDict()
|
| 522 |
+
for sign in signs:
|
| 523 |
+
if sign["Type"] == "syllabogram" and sign["Transliteration"] != "-":
|
| 524 |
+
sign_map[sign["Transliteration"]] = sign["IPA"]
|
| 525 |
+
json_path = OUT_DIR / "sign_to_ipa.json"
|
| 526 |
+
json_path.write_text(json.dumps(sign_map, ensure_ascii=False, indent=2), encoding="utf-8")
|
| 527 |
+
print(f" sign_to_ipa.json: {len(sign_map)} mappings → {json_path}")
|
| 528 |
+
|
| 529 |
+
# Stats
|
| 530 |
+
syllabograms = [s for s in signs if s["Type"] == "syllabogram"]
|
| 531 |
+
ideograms = [s for s in signs if s["Type"] == "ideogram"]
|
| 532 |
+
with_phonetic = [s for s in syllabograms if s["Transliteration"] != "-"]
|
| 533 |
+
print(f" Syllabograms: {len(syllabograms)} ({len(with_phonetic)} with phonetic values)")
|
| 534 |
+
print(f" Ideograms: {len(ideograms)}")
|
| 535 |
+
|
| 536 |
+
|
| 537 |
+
def build_word_list(
|
| 538 |
+
shannon_entries: list[dict],
|
| 539 |
+
wiktionary_entries: list[dict],
|
| 540 |
+
iecor_entries: list[dict],
|
| 541 |
+
) -> None:
|
| 542 |
+
"""Merge all word sources and write linear_b_words.tsv."""
|
| 543 |
+
# Priority order for IPA: IE-CoR (expert) > Wiktionary (ts=) > transliteration conversion
|
| 544 |
+
# Priority order for glosses: Wiktionary > Shannon > IE-CoR (no glosses)
|
| 545 |
+
|
| 546 |
+
# Index IE-CoR by transliteration
|
| 547 |
+
iecor_by_translit = {}
|
| 548 |
+
for e in iecor_entries:
|
| 549 |
+
t = e["Transliteration"]
|
| 550 |
+
iecor_by_translit[t] = e
|
| 551 |
+
|
| 552 |
+
# Index Wiktionary by transliteration
|
| 553 |
+
wikt_by_translit = {}
|
| 554 |
+
for e in wiktionary_entries:
|
| 555 |
+
t = e["Transliteration"]
|
| 556 |
+
if t:
|
| 557 |
+
wikt_by_translit[t] = e
|
| 558 |
+
|
| 559 |
+
# Build merged word list
|
| 560 |
+
# Key: transliteration (hyphenated form like "a-ke-ro")
|
| 561 |
+
all_words = {} # translit → merged dict
|
| 562 |
+
|
| 563 |
+
# 1. Start with Shannon entries (largest source)
|
| 564 |
+
for e in shannon_entries:
|
| 565 |
+
t = e["Transliteration"]
|
| 566 |
+
if t not in all_words:
|
| 567 |
+
all_words[t] = {
|
| 568 |
+
"Transliteration": t,
|
| 569 |
+
"Gloss": e["Gloss"],
|
| 570 |
+
"Word_Type": e["Word_Type"],
|
| 571 |
+
"IPA": "-",
|
| 572 |
+
"Source": "shannon_lexicon",
|
| 573 |
+
"Concept_ID": "-",
|
| 574 |
+
"Cognate_Set_ID": "-",
|
| 575 |
+
}
|
| 576 |
+
|
| 577 |
+
# 2. Merge Wiktionary (better glosses, has IPA)
|
| 578 |
+
for e in wiktionary_entries:
|
| 579 |
+
t = e["Transliteration"]
|
| 580 |
+
if not t:
|
| 581 |
+
continue
|
| 582 |
+
# Skip non-standard transliterations from Wiktionary:
|
| 583 |
+
# - Single letters (measure symbols like L, N, P, Q, S, T, V, Z)
|
| 584 |
+
# - ALL-CAPS abbreviations (AES, KAPO, etc.) — these are ideogram labels
|
| 585 |
+
# - Entries that look like Greek or modern language forms
|
| 586 |
+
if len(t) <= 2 and t.isalpha() and "-" not in t:
|
| 587 |
+
continue
|
| 588 |
+
if t.isupper() and len(t) <= 6:
|
| 589 |
+
continue
|
| 590 |
+
# Valid transliterations use lowercase with hyphens (a-ke-ro)
|
| 591 |
+
# or start with * for undeciphered signs
|
| 592 |
+
if not re.match(r'^[\-a-z0-9*]+$', t.replace("-", "")):
|
| 593 |
+
continue
|
| 594 |
+
if t in all_words:
|
| 595 |
+
# Update gloss if Wiktionary has a better one
|
| 596 |
+
if e["Gloss"] != "-":
|
| 597 |
+
all_words[t]["Gloss"] = e["Gloss"]
|
| 598 |
+
if e["IPA"]:
|
| 599 |
+
all_words[t]["IPA"] = e["IPA"]
|
| 600 |
+
all_words[t]["Source"] = "wiktionary_gmy"
|
| 601 |
+
if e["Word_Type"] != "common":
|
| 602 |
+
all_words[t]["Word_Type"] = e["Word_Type"]
|
| 603 |
+
else:
|
| 604 |
+
all_words[t] = {
|
| 605 |
+
"Transliteration": t,
|
| 606 |
+
"Gloss": e["Gloss"],
|
| 607 |
+
"Word_Type": e["Word_Type"],
|
| 608 |
+
"IPA": e["IPA"] if e["IPA"] else "-",
|
| 609 |
+
"Source": "wiktionary_gmy",
|
| 610 |
+
"Concept_ID": "-",
|
| 611 |
+
"Cognate_Set_ID": "-",
|
| 612 |
+
}
|
| 613 |
+
|
| 614 |
+
# 3. Merge IE-CoR (best IPA, has concept IDs)
|
| 615 |
+
for e in iecor_entries:
|
| 616 |
+
t = e["Transliteration"]
|
| 617 |
+
if t in all_words:
|
| 618 |
+
# IE-CoR IPA takes priority (expert reconstructions)
|
| 619 |
+
if e["IPA"] and e["IPA"] != "-":
|
| 620 |
+
all_words[t]["IPA"] = e["IPA"]
|
| 621 |
+
if e["Concept_IDs"]:
|
| 622 |
+
all_words[t]["Concept_ID"] = e["Concept_IDs"]
|
| 623 |
+
# Mark as having IE-CoR data
|
| 624 |
+
all_words[t]["Source"] = "iecor+" + all_words[t]["Source"]
|
| 625 |
+
else:
|
| 626 |
+
all_words[t] = {
|
| 627 |
+
"Transliteration": t,
|
| 628 |
+
"Gloss": "-",
|
| 629 |
+
"Word_Type": "common",
|
| 630 |
+
"IPA": e["IPA"],
|
| 631 |
+
"Source": "iecor",
|
| 632 |
+
"Concept_ID": e.get("Concept_IDs", "-"),
|
| 633 |
+
"Cognate_Set_ID": "-",
|
| 634 |
+
}
|
| 635 |
+
|
| 636 |
+
# 4. For entries without IPA, generate from transliteration
|
| 637 |
+
for t, entry in all_words.items():
|
| 638 |
+
if entry["IPA"] == "-" or not entry["IPA"]:
|
| 639 |
+
entry["IPA"] = transliterate_to_ipa(t)
|
| 640 |
+
if entry["IPA"] != "-":
|
| 641 |
+
entry["IPA_Source"] = "translit_conversion"
|
| 642 |
+
else:
|
| 643 |
+
entry["IPA_Source"] = "none"
|
| 644 |
+
else:
|
| 645 |
+
entry["IPA_Source"] = "expert"
|
| 646 |
+
|
| 647 |
+
# 5. Compute SCA (Sound Class Alphabet) encoding
|
| 648 |
+
try:
|
| 649 |
+
sys.path.insert(0, str(ROOT / "cognate_pipeline" / "src"))
|
| 650 |
+
from cognate_pipeline.normalise.sound_class import ipa_to_sound_class
|
| 651 |
+
has_sca = True
|
| 652 |
+
except ImportError:
|
| 653 |
+
has_sca = False
|
| 654 |
+
print(" [WARN] cognate_pipeline not available, SCA will be computed from IPA directly")
|
| 655 |
+
|
| 656 |
+
for entry in all_words.values():
|
| 657 |
+
if has_sca and entry["IPA"] != "-":
|
| 658 |
+
try:
|
| 659 |
+
entry["SCA"] = ipa_to_sound_class(entry["IPA"])
|
| 660 |
+
except Exception:
|
| 661 |
+
entry["SCA"] = entry["IPA"].upper()
|
| 662 |
+
elif entry["IPA"] != "-":
|
| 663 |
+
# Simple uppercase fallback
|
| 664 |
+
entry["SCA"] = entry["IPA"].upper()
|
| 665 |
+
else:
|
| 666 |
+
entry["SCA"] = "-"
|
| 667 |
+
|
| 668 |
+
    # Write output
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    tsv_path = OUT_DIR / "linear_b_words.tsv"
    cols = ["Word", "IPA", "SCA", "Source", "Concept_ID", "Cognate_Set_ID",
            "Gloss", "Word_Type", "IPA_Source"]

    # Sort: common nouns first, then by transliteration
    type_order = {"common": 0, "unknown": 1, "theonym": 2, "ethnic": 3,
                  "proper": 4, "toponym": 5, "anthroponym": 6}
    sorted_entries = sorted(
        all_words.values(),
        key=lambda e: (type_order.get(e["Word_Type"], 9), e["Transliteration"]),
    )

    with open(tsv_path, "w", encoding="utf-8", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=cols, delimiter="\t",
                                extrasaction="ignore")
        writer.writeheader()
        for entry in sorted_entries:
            writer.writerow({
                "Word": entry["Transliteration"],
                "IPA": entry["IPA"],
                "SCA": entry["SCA"],
                "Source": entry["Source"],
                "Concept_ID": entry["Concept_ID"],
                "Cognate_Set_ID": entry["Cognate_Set_ID"],
                "Gloss": entry["Gloss"],
                "Word_Type": entry["Word_Type"],
                "IPA_Source": entry.get("IPA_Source", "unknown"),
            })
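
For orientation, one schematic row of the resulting linear_b_words.tsv (columns are tab-separated; the values shown are illustrative, not an actual dataset row):

Word    IPA     SCA     Source         Concept_ID  Cognate_Set_ID  Gloss  Word_Type  IPA_Source
ko-wo   korwos  KVRWVS  iecor+shannon  1234        -               boy    common     expert
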
    # Statistics
    total = len(sorted_entries)
    common = sum(1 for e in sorted_entries if e["Word_Type"] == "common")
    proper = total - common
    with_expert_ipa = sum(1 for e in sorted_entries if e.get("IPA_Source") == "expert")
    with_translit_ipa = sum(1 for e in sorted_entries if e.get("IPA_Source") == "translit_conversion")

    print(f"\n Words TSV: {total} entries → {tsv_path}")
    print(f" Common nouns: {common}")
    print(f" Proper nouns (names/places): {proper}")
    print(f" IPA from expert sources: {with_expert_ipa}")
    print(f" IPA from transliteration conversion: {with_translit_ipa}")
    print(f" No IPA: {total - with_expert_ipa - with_translit_ipa}")

    # Source distribution
    src_counts = {}
    for e in sorted_entries:
        s = e["Source"]
        src_counts[s] = src_counts.get(s, 0) + 1
    print("\n Source distribution:")
    for src, count in sorted(src_counts.items(), key=lambda x: -x[1]):
        print(f"   {src}: {count}")

    return sorted_entries

def main():
    print("=" * 70)
    print("LINEAR B DATASET BUILD")
    print("=" * 70)

    # 1. Parse Unicode sign inventory
    print("\n[1/4] Parsing Unicode UCD for Linear B signs...")
    ucd_path = RAW_DIR / "UnicodeData.txt"
    if not ucd_path.exists():
        print(" ERROR: UnicodeData.txt not found. Run ingest_linear_b.py first.")
        sys.exit(1)
    signs = parse_unicode_signs(ucd_path)
    build_sign_inventory(signs)

    # 2. Parse Shannon lexicon
    print("\n[2/4] Parsing Shannon Linear B Lexicon...")
    shannon_path = RAW_DIR / "shannon_Linear_B_Lexicon.csv"
    if not shannon_path.exists():
        print(" ERROR: shannon_Linear_B_Lexicon.csv not found. Run ingest_linear_b.py first.")
        sys.exit(1)
    shannon_entries = parse_shannon_lexicon(shannon_path)
    print(f" Parsed {len(shannon_entries)} entries")
    type_counts = {}
    for e in shannon_entries:
        type_counts[e["Word_Type"]] = type_counts.get(e["Word_Type"], 0) + 1
    for wt, c in sorted(type_counts.items(), key=lambda x: -x[1]):
        print(f"   {wt}: {c}")

    # 3. Parse Wiktionary lemmas
    print("\n[3/4] Parsing Wiktionary Mycenaean Greek lemmas...")
    wikt_path = RAW_DIR / "wiktionary_gmy_lemmas.json"
    if not wikt_path.exists():
        print(" ERROR: wiktionary_gmy_lemmas.json not found. Run ingest_linear_b.py first.")
        sys.exit(1)
    wiktionary_entries = parse_wiktionary_lemmas(wikt_path)
    print(f" Parsed {len(wiktionary_entries)} entries")
    with_ipa = sum(1 for e in wiktionary_entries if e["IPA"])
    with_translit = sum(1 for e in wiktionary_entries if e["Transliteration"])
    with_gloss = sum(1 for e in wiktionary_entries if e["Gloss"] != "-")
    print(f" With IPA (ts=): {with_ipa}")
    print(f" With transliteration: {with_translit}")
    print(f" With gloss: {with_gloss}")

    # 4. Load IE-CoR existing data
    print("\n[4/4] Loading IE-CoR Mycenaean Greek data...")
    iecor_entries = load_iecor_gmy_words()
    print(f" Loaded {len(iecor_entries)} entries from cognate pairs")

    # 5. Merge and build word list
    print("\n[BUILD] Merging all sources...")
    entries = build_word_list(shannon_entries, wiktionary_entries, iecor_entries)

    print("\n" + "=" * 70)
    print("BUILD COMPLETE")
    print("=" * 70)
    print(f"\nOutput directory: {OUT_DIR}")
    for p in sorted(OUT_DIR.iterdir()):
        print(f" {p.name}: {p.stat().st_size:,} bytes")


if __name__ == "__main__":
    main()
scripts/ingest_linear_b.py
ADDED
@@ -0,0 +1,280 @@
#!/usr/bin/env python3
"""Ingest Linear B (Mycenaean Greek) data from CC-BY-SA compatible sources.

Sources:
  1. Unicode UCD — Sign inventory (88 syllabograms + 123 ideograms)
     URL: https://www.unicode.org/Public/UCD/latest/ucd/UnicodeData.txt
     License: Unicode Terms (permissive, CC-BY-SA-4.0 compatible)

  2. Wiktionary — Mycenaean Greek lemmas (~435 entries)
     URL: https://en.wiktionary.org/w/api.php (MediaWiki API)
     License: CC-BY-SA-3.0+

  3. jhnwnstd/shannon — Linear B Lexicon (2,747 entries, MIT license)
     URL: https://raw.githubusercontent.com/jhnwnstd/shannon/main/Linear_B_Lexicon.csv
     License: MIT

Iron Rule: Data comes from downloaded files/API responses. No hardcoded word lists.

Usage:
    python scripts/ingest_linear_b.py [--dry-run]
"""

from __future__ import annotations

import argparse
import csv
import io
import json
import logging
import os
import re
import sys
import time
import urllib.error
import urllib.parse
import urllib.request
from pathlib import Path

# Force UTF-8 output so Linear B glyphs print cleanly regardless of console encoding
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding="utf-8")
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding="utf-8")

ROOT = Path(__file__).resolve().parent.parent

logger = logging.getLogger(__name__)

RAW_DIR = ROOT / "data" / "training" / "raw" / "linear_b"

# ── Source URLs ──

UNICODE_DATA_URL = "https://www.unicode.org/Public/UCD/latest/ucd/UnicodeData.txt"

WIKTIONARY_API = "https://en.wiktionary.org/w/api.php"

SHANNON_LEXICON_URL = (
    "https://raw.githubusercontent.com/jhnwnstd/shannon/main/Linear_B_Lexicon.csv"
)

# Linear B Unicode ranges
LINB_SYLLABARY_START = 0x10000
LINB_SYLLABARY_END = 0x1007F
LINB_IDEOGRAM_START = 0x10080
LINB_IDEOGRAM_END = 0x100FF
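
These ranges drive the sign-inventory step in build_linear_b_dataset.py. UnicodeData.txt is a semicolon-delimited table whose first field is the hex code point and second field the character name, so the inventory falls out of a simple range filter. A minimal sketch of that filter under the published UCD layout (the real parsing lives in parse_unicode_signs in the build script):

# Sketch only: field 0 = hex code point, field 1 = character name.
def iter_linear_b_signs(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            fields = line.split(";")
            cp = int(fields[0], 16)
            if LINB_SYLLABARY_START <= cp <= LINB_IDEOGRAM_END:
                kind = "syllabogram" if cp <= LINB_SYLLABARY_END else "ideogram"
                yield chr(cp), fields[1], kind

# e.g. first yielded tuple: ('𐀀', 'LINEAR B SYLLABLE B008 A', 'syllabogram')
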
def download_file(url: str, dest: Path, label: str) -> bool:
    """Download a file, skipping if already present and non-empty."""
    if dest.exists() and dest.stat().st_size > 0:
        logger.info(f" {label}: already exists ({dest.stat().st_size:,} bytes)")
        return True
    logger.info(f" {label}: downloading from {url}")
    try:
        req = urllib.request.Request(url, headers={"User-Agent": "LinearB-Ingestion/1.0"})
        with urllib.request.urlopen(req, timeout=60) as resp:
            data = resp.read()
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_bytes(data)
        logger.info(f" {label}: downloaded {len(data):,} bytes")
        return True
    except (urllib.error.URLError, urllib.error.HTTPError, OSError) as e:
        logger.error(f" {label}: DOWNLOAD FAILED — {e}")
        return False
def download_unicode_data(dry_run: bool = False) -> Path:
    """Download UnicodeData.txt."""
    dest = RAW_DIR / "UnicodeData.txt"
    if dry_run:
        logger.info(f" [DRY RUN] Would download {UNICODE_DATA_URL}")
        return dest
    download_file(UNICODE_DATA_URL, dest, "UnicodeData.txt")
    return dest


def download_shannon_lexicon(dry_run: bool = False) -> Path:
    """Download jhnwnstd/shannon Linear_B_Lexicon.csv."""
    dest = RAW_DIR / "shannon_Linear_B_Lexicon.csv"
    if dry_run:
        logger.info(f" [DRY RUN] Would download {SHANNON_LEXICON_URL}")
        return dest
    download_file(SHANNON_LEXICON_URL, dest, "shannon_lexicon")
    return dest
def download_wiktionary_lemmas(dry_run: bool = False) -> Path:
    """Download all Mycenaean Greek lemmas from Wiktionary API."""
    dest = RAW_DIR / "wiktionary_gmy_lemmas.json"
    if dest.exists() and dest.stat().st_size > 0:
        logger.info(f" wiktionary: already exists ({dest.stat().st_size:,} bytes)")
        return dest
    if dry_run:
        logger.info(" [DRY RUN] Would fetch Wiktionary gmy lemmas")
        return dest

    all_titles = []
    cmcontinue = None
    page = 0

    while True:
        page += 1
        params = {
            "action": "query",
            "list": "categorymembers",
            "cmtitle": "Category:Mycenaean_Greek_lemmas",
            "cmlimit": "500",
            "format": "json",
        }
        if cmcontinue:
            params["cmcontinue"] = cmcontinue

        url = f"{WIKTIONARY_API}?{urllib.parse.urlencode(params)}"
        logger.info(f" wiktionary: page {page}, {len(all_titles)} titles so far...")

        req = urllib.request.Request(url, headers={"User-Agent": "LinearB-Ingestion/1.0"})
        with urllib.request.urlopen(req, timeout=30) as resp:
            data = json.loads(resp.read().decode("utf-8"))

        members = data.get("query", {}).get("categorymembers", [])
        for m in members:
            all_titles.append(m["title"])

        # Check for continuation
        cont = data.get("continue", {})
        if "cmcontinue" in cont:
            cmcontinue = cont["cmcontinue"]
            time.sleep(0.5)  # Be polite to Wiktionary
        else:
            break

    logger.info(f" wiktionary: fetched {len(all_titles)} lemma titles")

    # Now fetch content for each lemma (in batches of 50)
    lemma_data = []
    batch_size = 50
    for i in range(0, len(all_titles), batch_size):
        batch = all_titles[i : i + batch_size]
        titles_param = "|".join(batch)
        params = {
            "action": "query",
            "titles": titles_param,
            "prop": "revisions",
            "rvprop": "content",
            "rvslots": "main",
            "format": "json",
        }
        url = f"{WIKTIONARY_API}?{urllib.parse.urlencode(params)}"
        req = urllib.request.Request(url, headers={"User-Agent": "LinearB-Ingestion/1.0"})

        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                data = json.loads(resp.read().decode("utf-8"))

            pages = data.get("query", {}).get("pages", {})
            for page_id, page_data in pages.items():
                title = page_data.get("title", "")
                revisions = page_data.get("revisions", [])
                if revisions:
                    content = revisions[0].get("slots", {}).get("main", {}).get("*", "")
                    lemma_data.append({"title": title, "wikitext": content})
        except Exception as e:
            logger.warning(f" wiktionary batch {i//batch_size}: {e}")

        if i + batch_size < len(all_titles):
            time.sleep(1.0)  # Rate limit

    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_text(json.dumps(lemma_data, ensure_ascii=False, indent=2), encoding="utf-8")
    logger.info(f" wiktionary: saved {len(lemma_data)} lemma entries to {dest}")
    return dest
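
The title-collection loop follows the standard MediaWiki continuation protocol: each list=categorymembers response carries a continue block until the category is exhausted. Schematically, a first-page response looks roughly like the following (values trimmed and illustrative):

{
  "continue": {"cmcontinue": "page|...|...", "continue": "-||"},
  "query": {
    "categorymembers": [
      {"pageid": 123456, "ns": 0, "title": "𐀀𐀐𐀫"},
      {"pageid": 123457, "ns": 0, "title": "𐀀𐀕"}
    ]
  }
}
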
def download_wiktionary_swadesh(dry_run: bool = False) -> Path:
    """Download Mycenaean Greek Swadesh list from Wiktionary."""
    dest = RAW_DIR / "wiktionary_gmy_swadesh.json"
    if dest.exists() and dest.stat().st_size > 0:
        logger.info(f" swadesh: already exists ({dest.stat().st_size:,} bytes)")
        return dest
    if dry_run:
        logger.info(" [DRY RUN] Would fetch Wiktionary gmy Swadesh list")
        return dest

    params = {
        "action": "query",
        "titles": "Appendix:Mycenaean_Greek_Swadesh_list",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "format": "json",
    }
    url = f"{WIKTIONARY_API}?{urllib.parse.urlencode(params)}"
    req = urllib.request.Request(url, headers={"User-Agent": "LinearB-Ingestion/1.0"})

    with urllib.request.urlopen(req, timeout=30) as resp:
        data = json.loads(resp.read().decode("utf-8"))

    content = ""  # guard against an empty pages dict
    pages = data.get("query", {}).get("pages", {})
    for page_id, page_data in pages.items():
        content = page_data.get("revisions", [{}])[0].get("slots", {}).get("main", {}).get("*", "")

    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_text(json.dumps({"title": "Mycenaean_Greek_Swadesh_list", "wikitext": content},
                               ensure_ascii=False, indent=2), encoding="utf-8")
    logger.info(f" swadesh: saved to {dest}")
    return dest
def main():
    parser = argparse.ArgumentParser(description="Ingest Linear B data from open sources")
    parser.add_argument("--dry-run", action="store_true", help="Show what would be downloaded")
    args = parser.parse_args()

    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
        datefmt="%H:%M:%S",
    )

    RAW_DIR.mkdir(parents=True, exist_ok=True)

    print("=" * 70)
    print("LINEAR B DATA INGESTION")
    print("=" * 70)

    # 1. Unicode Data
    print("\n[1/4] Unicode UCD (sign inventory)")
    ucd_path = download_unicode_data(args.dry_run)

    # 2. Shannon lexicon
    print("\n[2/4] jhnwnstd/shannon Linear B Lexicon (MIT)")
    shannon_path = download_shannon_lexicon(args.dry_run)

    # 3. Wiktionary lemmas
    print("\n[3/4] Wiktionary Mycenaean Greek lemmas (CC-BY-SA)")
    wikt_path = download_wiktionary_lemmas(args.dry_run)

    # 4. Wiktionary Swadesh list
    print("\n[4/4] Wiktionary Mycenaean Greek Swadesh list")
    swadesh_path = download_wiktionary_swadesh(args.dry_run)

    print("\n" + "=" * 70)
    print("INGESTION COMPLETE")
    print("=" * 70)

    # Verify files
    for label, path in [
        ("UnicodeData.txt", ucd_path),
        ("Shannon Lexicon", shannon_path),
        ("Wiktionary Lemmas", wikt_path),
        ("Wiktionary Swadesh", swadesh_path),
    ]:
        if path.exists():
            size = path.stat().st_size
            print(f" {label}: {path.name} ({size:,} bytes)")
        else:
            print(f" {label}: NOT DOWNLOADED")

    print(f"\nAll raw data saved to: {RAW_DIR}")


if __name__ == "__main__":
    main()
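
Taken together, the two scripts form a two-step pipeline: ingest first (network access required), then build (offline, per the "Run ingest_linear_b.py first" guards above). A typical run from the repository root:

python scripts/ingest_linear_b.py           # download raw sources into data/training/raw/linear_b/
python scripts/build_linear_b_dataset.py    # build the sign inventory and data/linear_b/linear_b_words.tsv
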