# 📘 Traditional Chinese Examination Dataset 🤗
This project is a collection of exam-related files (PDFs and MP3s) that can be used to train document/audio understanding models, index question-answer pairs, or serve as material for OCR preprocessing.
## 🗂️ Project Structure

```
project/
├── data/           # Original exam files (PDFs, MP3s)
├── convert.py      # Script to generate metadata
├── main.py         # Script to load the dataset with 🤗 datasets
├── metadata.jsonl  # Output metadata file
└── README.md       # Project documentation
```
## 🛠️ Requirements

- Python 3.7+
- Install the required packages:

```
pip install datasets
```
## 📝 Generate Metadata

Run `convert.py` to extract metadata from filenames and generate a `metadata.jsonl` file:

```
python convert.py
```

This reads all files in the `data/` directory and outputs a line-delimited JSON file (`metadata.jsonl`) describing each file with fields like:
```json
{
  "id": "01-1131-2-三公民-題目",
  "serial": "01",
  "grade": "三",
  "subject": "公民",
  "variant": null,
  "type": "題目",
  "path": "data/01-1131-2-三公民-題目.pdf",
  "format": "pdf"
}
```
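As a rough illustration of what `convert.py` does, the record above can be produced by matching the filename against a pattern. This is only a sketch: the pattern below is inferred from the single sample filename (the meaning of the middle `1131-2` segment is assumed to be an opaque term code), and the actual script may parse differently.

```python
import re
from pathlib import Path

# Hypothetical pattern inferred from "01-1131-2-三公民-題目.pdf":
#   serial - term code - number - grade + subject (optional variant) - type
PATTERN = re.compile(
    r"(?P<serial>\d+)-\d+-\d+-"
    r"(?P<grade>[一二三])(?P<subject>[^()-]+)"
    r"(?:\((?P<variant>[^)]+)\))?-"
    r"(?P<type>[^.]+)"
)

def parse_filename(path: str) -> dict:
    """Parse one exam file path into a metadata record (sketch)."""
    p = Path(path)
    match = PATTERN.match(p.stem)
    if match is None:
        raise ValueError(f"unrecognised filename: {path}")
    return {
        "id": p.stem,
        "serial": match.group("serial"),
        "grade": match.group("grade"),
        "subject": match.group("subject"),
        "variant": match.group("variant"),  # None when no parentheses
        "type": match.group("type"),
        "path": path,
        "format": p.suffix.lstrip(".").lower(),
    }
```

Writing one such record per line with `json.dumps(..., ensure_ascii=False)` yields the `metadata.jsonl` format shown above.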
## 📥 Load the Dataset

Run `main.py` to load the dataset using the Hugging Face `datasets` library:

```
python main.py
```

This loads and prints a few samples from your dataset.
## 📊 Dataset Fields

- `id`: Unique identifier from the filename
- `serial`: Serial code from the filename
- `grade`: 一 (1st year), 二 (2nd year), 三 (3rd year)
- `subject`: e.g., 公民 (civics), 英文 (English), 數學 (math)
- `variant`: Optional, from parentheses (e.g., 體, 音)
- `type`: 題目 (questions), 答案 (answers), 手寫卷 (handwritten)
- `path`: File path
- `format`: `pdf` or `mp3`
## 🔧 Tips

- Use this dataset to train document/audio understanding models, index question-answer pairs, or preprocess files for OCR.
- Extend `convert.py` to extract text from PDFs or audio features if needed.
- Easily upload the dataset to the Hugging Face Hub via `datasets.Dataset.push_to_hub`.
## 🔬 Future Work

- PDF text extraction
- Audio transcription
- Tagging with more metadata (exam year, term, difficulty, etc.)
- Train-test splitting
## 📄 License

Apache 2.0 License
## 🤝 Contributions Welcome!

Feel free to fork, improve, and open a PR!
## 🙏 Special Thanks

This project was made possible with the help of ChatGPT, an AI assistant by OpenAI, which provided:

- Filename parsing and metadata extraction strategy
- Dataset integration guidance
- Project structuring and documentation
- 🧠 Motivation to keep going when the files got messy

Thanks, AI buddy! 🤖💡