

Frontend-only live semantic search with transformers.js

This is the HF data repo for indexed texts, ready to import into SemanticFinder. Each file contains the original text, its text chunks, and their embeddings.


| Filesize (MB) | Title | Author | Year | Lang | Model | Quantized | Split param | Split type | Characters | Chunks | Export decimals | Lines | Notes / source | Filename |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 100.96 | Collection of 100 books | Various Authors | 1890 | en | Xenova/bge-small-en-v1.5 | True | 100 | Words | 55705582 | 158957 | 2 | 1085035 | US Public Domain Books (English) | Collection_of_100_books_dd80b04b.json.gz |
| 4.78 | Das Kapital | Karl Marx | 1867 | de | Xenova/multilingual-e5-small | True | 80 | Words | 2003807 | 3164 | 5 | 28673 | | Das_Kapital_c1a84fba.json.gz |
| 2.58 | Divina Commedia | Dante | 1321 | it | Xenova/multilingual-e5-base | True | 50 | Words | 383782 | 1179 | 5 | 6225 | | Divina_Commedia_d5a0fa67.json.gz |
| 11.92 | Don Quijote | Miguel de Cervantes | 1605 | es | Xenova/multilingual-e5-base | True | 25 | Words | 1047150 | 7186 | 4 | 12005 | | Don_Quijote_14a0b44.json.gz |
| 0.06 | Hansel and Gretel | Brothers Grimm | 1812 | en | TaylorAI/gte-tiny | True | 100 | Chars | 5304 | 55 | 5 | 9 | | Hansel_and_Gretel_4de079eb.json.gz |
| 13.52 | Iliad | Homer | -750 | gr | Xenova/multilingual-e5-small | True | 20 | Words | 1597139 | 11848 | 5 | 32659 | Including modern interpretation | Iliad_8de5d1ea.json.gz |
| 1.74 | IPCC Report 2023 | IPCC | 2023 | en | Supabase/bge-small-en | True | 200 | Chars | 307811 | 1566 | 5 | 3230 | state of knowledge of climate change | IPCC_Report_2023_2b260928.json.gz |
| 25.56 | King James Bible | None | | en | TaylorAI/gte-tiny | True | 200 | Chars | 4556163 | 23056 | 5 | 80496 | | King_James_Bible_24f6dc4c.json.gz |
| 11.45 | King James Bible | None | | en | TaylorAI/gte-tiny | True | 200 | Chars | 4556163 | 23056 | 2 | 80496 | | King_James_Bible_6434a78d.json.gz |
| 39.32 | Les Misérables | Victor Hugo | 1862 | fr | Xenova/multilingual-e5-base | True | 25 | Words | 3236941 | 19463 | 5 | 74491 | All five acts included | Les_Misérables_2239df51.json.gz |
| 8.67 | List of the Most Common English Words | Dolph | 2012 | en | Xenova/bge-small-en-v1.5 | True | \n | Regex | 210518 | 25322 | 2 | 25323 | GitHub Repo | List_of_the_Most_Common_English_Words_0d1e28dc.json.gz |
| 15.61 | List of the Most Common English Words | Dolph | 2012 | en | Xenova/multilingual-e5-base | True | \n | Regex | 210518 | 25322 | 2 | 25323 | GitHub Repo | List_of_the_Most_Common_English_Words_70320cde.json.gz |
| 0.46 | REGULATION (EU) 2023/138 | European Commission | 2022 | en | Supabase/bge-small-en | True | 25 | Words | 76809 | 424 | 5 | 1323 | | REGULATION_(EU)_2023_138_c00e7ff6.json.gz |
| 0.07 | Universal Declaration of Human Rights | United Nations | 1948 | en | TaylorAI/gte-tiny | True | \nArticle | Regex | 8623 | 63 | 5 | 109 | 30 articles | Universal_Declaration_of_Human_Rights_0a7da79a.json.gz |


Once loaded in SemanticFinder, searching through the whole Bible takes around 2 seconds! Try it out.

  1. Click on one of the example URLs of your choice.
  2. Once the index has loaded, simply enter something you want to search for and hit "Find". The results appear almost instantly.
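Conceptually, that search is just a nearest-neighbour scan over the stored embeddings. A minimal sketch of the idea follows; this is illustrative, not SemanticFinder's actual code, and the function and field names are made up:

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k chunks whose embeddings are most similar to the query.
// `index` is assumed to be an array of { chunk, embedding } pairs.
function topK(queryEmbedding, index, k = 5) {
  return index
    .map(({ chunk, embedding }) => ({
      chunk,
      score: cosineSimilarity(queryEmbedding, embedding),
    }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

A brute-force scan like this is fast enough in practice: even the ~23k Bible embeddings fit comfortably in memory and a single pass takes well under a second on modern hardware.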

Create SemanticFinder files

  1. Just use SemanticFinder as usual and run at least one search so that the index is created. This may take a while for large inputs: for example, indexing the Bible with 200-character chunks yields ~23k embeddings and takes 15-30 minutes with a quantized gte-tiny model.
  2. Add the metadata (so other people can find your index) and export the file. You can reduce the number of exported decimals to shrink the file; usually 3 is more than enough, and fewer will also do in most cases, but if you need the highest accuracy go with 5 or more.
  3. Create a PR here if you want your index added to the official collection! Just make sure to run it once to update the csv/md file. For now, the table here needs to be updated manually.
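The split parameter and split type from the table above, and the decimal reduction from step 2, can be sketched like this (illustrative helpers, not the actual SemanticFinder implementation):

```javascript
// Split `text` into chunks. `splitParam` is a count for "Chars"/"Words"
// and a pattern string (e.g. "\n" or "\nArticle") for "Regex".
function splitText(text, splitParam, splitType) {
  switch (splitType) {
    case "Chars": {
      const chunks = [];
      for (let i = 0; i < text.length; i += splitParam) {
        chunks.push(text.slice(i, i + splitParam));
      }
      return chunks;
    }
    case "Words": {
      const words = text.split(/\s+/).filter(Boolean);
      const chunks = [];
      for (let i = 0; i < words.length; i += splitParam) {
        chunks.push(words.slice(i, i + splitParam).join(" "));
      }
      return chunks;
    }
    case "Regex":
      return text.split(new RegExp(splitParam)).filter(s => s.trim().length > 0);
  }
}

// Round each embedding value to `decimals` places to shrink the export.
function quantizeEmbedding(embedding, decimals) {
  const f = 10 ** decimals;
  return embedding.map(v => Math.round(v * f) / f);
}
```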


  • This repo is public and shares documents of public interest or documents in the public domain.
  • If you have sensitive documents, you can still create the index with SemanticFinder and use it only locally: either load the index from disk each time, or host it on your local network and add the URL in SemanticFinder.

Use cases

Standard use case

Search for the most similar words/sentences/paragraphs/pages in any text. Imagine if Ctrl+F could find related words and not only the exact one you typed! If you're working on the same text repeatedly, you can save the index and reuse it.

There is also the option of summarizing the results with generative AI, such as Qwen models right in your browser, or by connecting a heavier Llama 2 instance via Ollama.
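As a rough sketch, summarization means feeding the top search results to a generative model as context. The helper below is hypothetical (the prompt wording and field names are made up); the commented-out request targets Ollama's documented /api/generate endpoint, with an illustrative model name:

```javascript
// Build a single summarization prompt from the top search results.
// Hypothetical helper; SemanticFinder's actual prompt may differ.
function buildSummaryPrompt(query, results) {
  const context = results.map((r, i) => `[${i + 1}] ${r.chunk}`).join("\n");
  return `Summarize the following passages as they relate to "${query}":\n${context}`;
}

// Example (requires a running Ollama instance; not executed here):
// await fetch("http://localhost:11434/api/generate", {
//   method: "POST",
//   body: JSON.stringify({
//     model: "llama2",
//     prompt: buildSummaryPrompt(query, hits),
//     stream: false,
//   }),
// });
```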

Advanced use cases

  • Translate words with multilingual embeddings, or see which words out of a given list are most similar to your input word. Using e.g. the index of ~30k English words, you can query in more than 100 input languages! Note that the expert settings are changed here so that only the first match is displayed.
  • English synonym finder, using again the index of ~30k English words but with slightly better (and smaller) English-only embeddings. Same expert settings here.
  • The universal index idea, i.e. use the 30k English words index and skip inference for any new words. This way you can perform instant semantic search on unknown/unseen/not-yet-indexed texts! Use this URL, then copy and paste any text of your choice into the text field. Inference for new words is turned off for speed.
  • A hybrid version of the universal index, where you use the 30k English words as the start index but then "fill up" with all the additional words the index doesn't know yet. For this option, use this URL, where inference is turned on again. This yields the best results and may be a good compromise, assuming new texts generally don't contain that many new words. Even if there are a couple of hundred (as in a research paper in a niche domain), inference is quite fast.
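The hybrid idea boils down to: tokenize the new text, look each word up in the precomputed index, and only run the model for the misses. A sketch with hypothetical names:

```javascript
// Given a precomputed word -> embedding map (e.g. the ~30k common
// English words), return the distinct words of a new text that are
// not yet in the index and would still need inference.
// Hypothetical helper, not SemanticFinder's code.
function missingWords(text, knownIndex) {
  const words = new Set(text.toLowerCase().match(/[a-z']+/g) || []);
  return [...words].filter(w => !knownIndex.has(w));
}
```

The returned list is typically short for ordinary English text, which is why the "fill up" step stays fast even for unseen documents.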

If you have any feedback/ideas/feature requests please open an issue or create a PR in the GitHub repo.

⭐Stars very welcome to spread the word and democratize semantic search!⭐
