---
language:
- tr
---
This is an embedded version of barandinho/wikipedia_tr: the dataset was chunked (chunk_size=2048, chunk_overlap=256) and each chunk was passed through an embedding model.
The embedding model used for this dataset is sentence-transformers/distiluse-base-multilingual-cased-v1, so you have to use the same model if you want to do similarity search.
According to our tests, it is one of the best embedding models for Turkish.
The embedding dimension is 64 and the values are int8.
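The base model itself produces 512-dimensional float vectors; the 64 int8 values come from sentence-transformers' `binary` precision, which (to our understanding) thresholds each dimension at 0 and packs the resulting bits 8-per-byte. A minimal sketch of that packing, using a random vector as a stand-in for a real embedding:

```python
import numpy as np

# Illustrative sketch (not the library's internal code): 'binary' precision
# thresholds each of the 512 float dimensions at 0, packs the bits 8-per-byte,
# and reinterprets the 512 / 8 = 64 bytes as signed int8 values.
rng = np.random.default_rng(0)
float_vec = rng.standard_normal(512).astype(np.float32)  # stand-in for a real embedding

bits = (float_vec > 0).astype(np.uint8)    # 1 bit per dimension
packed = np.packbits(bits).view(np.int8)   # 64 signed int8 values

print(packed.shape, packed.dtype)  # (64,) int8
```

This is why the stored vectors are so compact: each chunk costs 64 bytes instead of 2 KB of float32.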
You can do similarity search with usearch; below is an example of a similarity search for a given query.
```python
#!pip install sentence-transformers datasets usearch
import numpy as np
from datasets import load_dataset
from usearch.index import Index
from sentence_transformers import SentenceTransformer

# Load the dataset and the corresponding embedding model
ds = load_dataset('barandinho/wikipedia_tr_embedded', split="train")
embd = SentenceTransformer("sentence-transformers/distiluse-base-multilingual-cased-v1", trust_remote_code=True)

# Collect the precomputed int8 embeddings into a single array
dtype = np.int8
embeddings = np.asarray(ds['embed_int8'], dtype=dtype)

# Build a usearch index over the embeddings
num_dim = 64
index = Index(ndim=num_dim, metric='cos')
index.add(np.arange(len(embeddings)), embeddings)

q = 'Fatih Sultan Mehmet'  # the quality of the query strongly affects the results
# 'binary' precision packs the query into the same 64-value int8 format as the dataset
q_embd = embd.encode(q, precision='binary')
q_embd = np.asarray(q_embd, dtype=dtype)

# Get the top 3 results
matches = index.search(q_embd, 3)
for match in matches:
    idx = int(match.key)
    print(ds[idx]['title'])
    print(ds[idx]['text'])
    print("--" * 10)
```