
# TS-Corpus BPE Tokenizer (32k, Cased)

## Overview

This repository hosts a Byte Pair Encoding (BPE) tokenizer with a vocabulary size of 32,000, trained cased on several datasets from the TS Corpus website. The BPE method is particularly effective for languages like Turkish, providing a balance between word-level and character-level tokenization.

## Dataset Sources

The tokenizer was trained on a variety of text sources from TS Corpus, ensuring broad linguistic coverage.

The inclusion of idiomatic expressions, proverbs, and legal terminology provides a comprehensive toolkit for processing Turkish text across different domains.

## Tokenizer Model

This tokenizer uses the Byte Pair Encoding (BPE) method to represent text as subword units, keeping the vocabulary compact without sacrificing coverage. BPE is especially well suited to the agglutinative nature of Turkish, where a single word can carry multiple suffixes.
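
To make the method concrete, here is a minimal, self-contained sketch of the core BPE training loop: starting from characters, it repeatedly merges the most frequent adjacent symbol pair. This is an illustration only, not the code used to train this tokenizer, and the toy corpus and merge count below are invented for the example:

```python
from collections import Counter

def learn_bpe_merges(word_counts, num_merges):
    """Toy BPE trainer: repeatedly merge the most frequent adjacent
    symbol pair. `word_counts` maps words to their corpus frequency."""
    # Start from character-level symbols for each word.
    vocab = {tuple(word): count for word, count in word_counts.items()}
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for symbols, count in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += count
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the chosen merge everywhere it occurs.
        new_vocab = {}
        for symbols, count in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] = count
        vocab = new_vocab
    return merges

# Tiny invented corpus: frequent Turkish suffixes like "ler"/"lar"
# tend to become single subword units after only a few merges.
corpus = {"evler": 10, "evlerde": 6, "kitaplar": 8, "kitaplarda": 5}
print(learn_bpe_merges(corpus, 8))
```

Because merges are learned from pair frequencies, common suffixes end up as their own tokens while rare words still decompose into smaller pieces, which is exactly the behavior that makes BPE a good fit for Turkish morphology.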

## Usage

To use this tokenizer in your projects, load it with the Hugging Face `transformers` library:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tahaenesaslanturk/ts-corpus-bpe-32k-cased")
```
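
Once loaded, you can inspect how the tokenizer segments Turkish text. The calls below are standard `transformers` tokenizer methods; the exact subword splits depend on the merges learned from the TS Corpus data, so the example sentence is only illustrative:

```python
text = "Evlerimizden selamlar!"
print(tokenizer.tokenize(text))   # subword pieces; suffixes are often split off
ids = tokenizer.encode(text)      # token ids, including any special tokens
print(tokenizer.decode(ids, skip_special_tokens=True))  # recovers the input text
```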