Visually Grounded Embeddings for fastText and GloVe

This repository contains multiple visually grounded word embedding models: textual embeddings (GloVe and fastText) infused with visual information from images.
Compared to their purely textual counterparts, these embeddings have been shown to correlate more strongly with human judgments on various word similarity and relatedness benchmarks.
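As a rough illustration of how such a benchmark evaluation works, here is a minimal sketch. It assumes a hypothetical tab-separated file pairs.tsv (word1, word2, human rating) and an embedding file at 'path_to_embeddings'; it compares the model's cosine similarities against the human ratings via Spearman correlation:

import csv

import gensim
from scipy.stats import spearmanr

model = gensim.models.KeyedVectors.load_word2vec_format('path_to_embeddings', binary=True)

model_scores, human_scores = [], []
with open('pairs.tsv') as f:
    for word1, word2, rating in csv.reader(f, delimiter='\t'):
        # skip pairs with out-of-vocabulary words
        if word1 in model.key_to_index and word2 in model.key_to_index:
            model_scores.append(model.similarity(word1, word2))
            human_scores.append(float(rating))

# a higher correlation means closer agreement with human judgments
print(spearmanr(model_scores, human_scores).correlation)

gensim also ships a built-in helper, KeyedVectors.evaluate_word_pairs, which performs essentially this computation for benchmark files in the WordSim-353 format.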

Usage

All of the models are stored in the binary word2vec format and can be loaded with gensim:

import gensim

model_g = gensim.models.KeyedVectors.load_word2vec_format('path_to_embeddings', binary=True)

# retrieve the most similar words
print(model_g.most_similar('together', topn=10))

[('togther', 0.6425853967666626), ('togehter', 0.6374243497848511), ('togeather', 0.6196791529655457),
('togather', 0.5998020172119141), ('togheter', 0.5819681882858276),('toghether', 0.5738174319267273), 
('2gether', 0.5187329053878784), ('togethor', 0.501663088798523), ('gether', 0.49128714203834534), 
('toegther', 0.48457157611846924)]

print(model_g.most_similar('sad', topn=10))

[('saddening', 0.6763913631439209), ('depressing', 0.6676110029220581), ('saddened', 0.6352651715278625),
('sorrowful', 0.6336953043937683), ('heartbreaking', 0.6180269122123718), ('heartbroken', 0.6099187135696411),
('tragic', 0.6039361953735352), ('pathetic', 0.5848405361175537), ('Sad', 0.5826965570449829),
('mournful', 0.5742306709289551)]

# find the outlier word
print(model_g.doesnt_match(['fire', 'water', 'land', 'sea', 'air', 'car']))

car

where 'path_to_embeddings' is the path to the embedding file you intend to use.
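Rather than hard-coding a local path, you can also fetch a file directly from this repository with the huggingface_hub client. This is a sketch; the filename below is one of the files listed in the next section and is assumed to match the listing under Files and Versions:

from huggingface_hub import hf_hub_download
import gensim

# download one embedding file from this dataset repository
path = hf_hub_download(
    repo_id='fittar/visually_grounded_embeddings',
    filename='v_glove_1024d_2.0',
    repo_type='dataset',
)
model_g = gensim.models.KeyedVectors.load_word2vec_format(path, binary=True)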

Which embeddings to use

Under the Files and Versions tab, you will find the four available embedding files.

The following embedding files are from the paper Learning Zero-Shot Multifaceted Visually Grounded Word Embeddings via Multi-Task Training:

  • v_glove_1024d_1.0
  • v_fasttext_1024d_1.0

The following embedding files are from the paper Language with Vision: a Study on Grounded Word and Sentence Embeddings:

  • v_glove_1024d_2.0
  • v_glove_300_d_2.0

All of them contain 1024-dimensional word vectors, except v_glove_300_d_2.0, which contains 300-dimensional vectors.
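After loading, you can confirm which variant you have by checking the dimensionality gensim reports:

# 1024 for the 1024d files, 300 for v_glove_300_d_2.0
print(model_g.vector_size)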
