

Pre-trained word and phrase vectors trained on part of the Google News dataset (about 100 billion words). The model contains 300-dimensional vectors for 3 million words and phrases. The phrases were obtained using a simple data-driven approach described in 'Distributed Representations of Words and Phrases and their Compositionality' (Mikolov et al., 2013).
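The data-driven phrase approach scores adjacent word pairs by how much more often they co-occur than chance, then joins high-scoring pairs into single tokens (e.g. `New_York`). A minimal sketch of that scoring, following the formula in the cited paper; the function name and toy corpus here are illustrative, not part of the released model:

```python
from collections import Counter

def phrase_scores(tokens, delta=1.0):
    """Score adjacent word pairs; high scores suggest a phrase.

    Bigram score from Mikolov et al. (2013):
        score(wi, wj) = (count(wi wj) - delta) / (count(wi) * count(wj))
    where delta discounts pairs formed from very infrequent words.
    """
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return {
        pair: (count - delta) / (unigrams[pair[0]] * unigrams[pair[1]])
        for pair, count in bigrams.items()
    }

# Toy corpus: "new york" recurs as a unit, "york shire" appears only once.
corpus = ("new york is large . " * 5 + "new ideas in york shire . ").split()
scores = phrase_scores(corpus)
# ("new", "york") scores well above ("york", "shire"), so only the
# former would be merged into a phrase token.
```

In practice the released vectors themselves are typically loaded with gensim, e.g. `KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)`.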
