---
language: eng
license: cc-by-4.0
tags:
- word2vec
datasets: Gigaword_5th_Edition
---

## Information
A word2vec model trained by Andrey Kutuzov (andreku@ifi.uio.no) on a vocabulary of 292,967 words, corresponding to 4,815,382,730 tokens from the `Gigaword_5th_Edition` dataset.
The model was trained without lemmatization or POS tagging, using the fastText Skipgram algorithm with a window size of 5 and a vector dimension of 300.
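For reference, a comparable configuration can be expressed with gensim's `FastText` class. This is only an illustrative sketch of the stated hyperparameters (Skipgram, window of 5, dimension of 300), not the original training script; `corpus_sentences` below is a toy stand-in for the tokenized Gigaword corpus, and `min_count` is lowered solely so the toy example runs.

```python
from gensim.models import FastText

# Toy corpus standing in for the tokenized Gigaword text (illustration only).
corpus_sentences = [
    ["the", "quick", "brown", "fox"],
    ["jumps", "over", "the", "lazy", "dog"],
]

# Approximates the stated configuration with gensim 4.x parameter names:
# fastText Skipgram (sg=1), window of 5, dimension of 300.
model = FastText(
    sentences=corpus_sentences,
    vector_size=300,  # embedding dimension
    window=5,         # context window
    sg=1,             # Skipgram rather than CBOW
    min_count=1,      # kept low only for this toy corpus
)
```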

## How to use?
```python
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download

# Download the binary word2vec file from the Hugging Face Hub
# and load it as read-only KeyedVectors.
model = KeyedVectors.load_word2vec_format(
    hf_hub_download(repo_id="Word2vec/nlpl_16", filename="model.bin"),
    binary=True,
    unicode_errors="ignore",
)
```
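Once loaded, the vectors behave like any gensim `KeyedVectors` object. The query words below are illustrative assumptions; whether they resolve depends on the model's vocabulary.

```python
# Look up a vector and query nearest neighbours (illustrative words only;
# availability depends on the model's 292,967-word vocabulary).
vector = model["computer"]                        # 300-dimensional numpy array
print(model.most_similar("computer", topn=5))     # closest words by cosine similarity
print(model.similarity("computer", "laptop"))     # pairwise cosine similarity
```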

## Citation
Fares, Murhaf; Kutuzov, Andrei; Oepen, Stephan & Velldal, Erik (2017). Word vectors, reuse, and replicability: Towards a community repository of large-text resources. In Jörg Tiedemann (ed.), Proceedings of the 21st Nordic Conference on Computational Linguistics (NoDaLiDa), 22-24 May 2017. Linköping University Electronic Press. ISBN 978-91-7685-601-7.

This archive is part of the NLPL Word Vectors Repository (http://vectors.nlpl.eu/repository/), version 2.0, published on Friday, December 27, 2019.
Please see the file 'meta.json' in this archive and the overall repository metadata file http://vectors.nlpl.eu/repository/20.json for additional information.
The lifetime identifier for this model is: http://vectors.nlpl.eu/repository/20/16.zip