simonschoe committed
Commit a57c3ee
1 Parent(s): d1740d9

update README

Files changed (1)
  1. README.md +21 -4
README.md CHANGED
@@ -54,18 +54,18 @@ def query(payload):
  query({"inputs": "<insert-query-here>"})
  ```

+
  ## Usage (Gensim)

  ```python
- from huggingface_hub import hf_hub_url, cached_download
+ from huggingface_hub import hf_hub_download
  from gensim.models.fasttext import load_facebook_model

  # download model from huggingface hub
- url = hf_hub_url(repo_id="simonschoe/call2vec", filename="model.bin")
- cached_download(url)
+ model_path = hf_hub_download(repo_id="simonschoe/call2vec", filename="model.bin")

  # load model via gensim
- model = load_facebook_model(<PATH-MODEL>)
+ model = load_facebook_model(model_path)

  # extract word embeddings
  model.wv['transformation']
@@ -77,6 +77,23 @@ model.wv.most_similar(negative='transformation', topn=5, restrict_vocab=None)
  model.wv.similarity('transformation', 'continuity')
  ```

+ ## Usage (fasttext)
+
+ ```python
+ import fasttext
+ from huggingface_hub import hf_hub_download
+
+ # download model from huggingface hub
+ model_path = hf_hub_download(repo_id="simonschoe/call2vec", filename="model.bin")
+
+ # load model via fasttext
+ model = fasttext.load_model(model_path)
+
+ # get similar phrases
+ model.get_nearest_neighbors("transformation", k=5)
+ ```
+
+
  If model size is crucial, the final model could be additionally compressed using the [`compress-fasttext`](https://github.com/avidale/compress-fasttext) library (e.g., via pruning, conversion to `float16`, or product quantization).


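For the compression note at the end of the README excerpt above, a minimal sketch of how the pruning/quantization step could look with `compress-fasttext`. The `prune_ft_freq` helper, its `pq` argument, and the output filename are assumptions based on that library's documented workflow, not part of this commit:

```python
import compress_fasttext
from gensim.models.fasttext import load_facebook_model
from huggingface_hub import hf_hub_download

# download and load the full-size model, as in the gensim example above
model_path = hf_hub_download(repo_id="simonschoe/call2vec", filename="model.bin")
model = load_facebook_model(model_path)

# prune low-frequency vectors and apply product quantization
# (assumed compress-fasttext API: prune_ft_freq operating on gensim FastTextKeyedVectors)
small_model = compress_fasttext.prune_ft_freq(model.wv, pq=True)

# persist the compressed vectors (hypothetical output path)
small_model.save("call2vec_compressed.bin")

# compressed vectors can still be queried like regular fastText embeddings
small_model["transformation"]
```

Pruning combined with product quantization typically yields the largest size reduction; conversion to `float16`, also mentioned in the README, is a lighter-weight alternative.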