
TS-Corpus WordPiece Tokenizer (32k, Uncased)

Overview

This repository contains an uncased WordPiece tokenizer with a 32,000-token vocabulary, trained on several datasets from the TS Corpus website. It is designed for Turkish text, drawing on rich and diverse sources to provide a robust tool for natural language processing tasks.

Dataset Sources

The tokenizer was trained on multiple corpora from the TS Corpus collection. These sources span a wide range of texts, from encyclopedic articles to legal documents, giving the tokenizer a comprehensive linguistic foundation.

Tokenizer Model

The tokenizer uses the WordPiece model, which is widely used in modern NLP systems and is particularly effective for morphologically rich languages like Turkish because of its subword segmentation approach. The tokenizer is uncased: it does not differentiate between uppercase and lowercase letters, so tokenization is uniform regardless of the input's casing.
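
For illustration, a comparable uncased WordPiece tokenizer can be built and trained with the Hugging Face tokenizers library. This is a minimal sketch rather than the exact training script used for this model; the file path "corpus.txt" and the special-token set are placeholder assumptions.

from tokenizers import Tokenizer, normalizers, pre_tokenizers, trainers
from tokenizers.models import WordPiece

# Placeholder path; the actual TS Corpus files are not bundled in this repository.
corpus_files = ["corpus.txt"]

# WordPiece model with an unknown-token fallback.
tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
# Uncased: lowercase all input before segmentation.
tokenizer.normalizer = normalizers.Lowercase()
# Split on whitespace and punctuation before applying WordPiece.
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

trainer = trainers.WordPieceTrainer(
    vocab_size=32000,
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train(files=corpus_files, trainer=trainer)
tokenizer.save("ts-corpus-wordpiece-32k-uncased.json")

Note that generic Unicode lowercasing does not apply Turkish-specific casing rules (for example, "I" maps to "i" rather than "ı"), which is a common caveat for uncased Turkish tokenizers.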

Usage

To use this tokenizer, you can load it via the Hugging Face transformers library as follows:

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("tahaenesaslanturk/ts-corpus-wordpiece-32k-uncased")
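
Once loaded, the tokenizer can be applied directly to Turkish text. The sentence below is an arbitrary example; the exact subword split depends on the learned 32k vocabulary.

# Encode a sample sentence; input is lowercased automatically because the tokenizer is uncased.
text = "Türkiye'de doğal dil işleme çalışmaları hızla gelişiyor."
encoding = tokenizer(text)
print(tokenizer.convert_ids_to_tokens(encoding["input_ids"]))  # subword tokens, continuation pieces prefixed with "##"
print(tokenizer.decode(encoding["input_ids"]))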