
This Word2Vec model was trained on a subset of the English Wikipedia (enwik8), comprising the first 100,000,000 bytes of the Wikipedia dump. The model learns semantic word relationships and is particularly useful for natural language processing (NLP) tasks, including word similarity, analogy detection, and text generation.
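
A minimal usage sketch is shown below, assuming the vectors are distributed in the standard word2vec text format and loadable with gensim; the filename `word2vec_enwik8.txt` is a hypothetical placeholder, not a confirmed artifact of this repository.

```python
from gensim.models import KeyedVectors

# Load the pretrained vectors (hypothetical filename; adjust to the
# actual file shipped with this model).
wv = KeyedVectors.load_word2vec_format("word2vec_enwik8.txt", binary=False)

# Word similarity: cosine similarity between two word vectors.
print(wv.similarity("king", "queen"))

# Analogy detection: king - man + woman should rank "queen" highly.
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=5))
```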
