Hierarchy Transformers (HiTs)

AI & ML interests

This collection includes language models trained on hierarchies using hyperbolic losses. The resulting HiT models yield entity embeddings that are hierarchically organised in hyperbolic space.

Hierarchy Transformers (HiTs) are language models capable of interpreting and encoding hierarchies explicitly.

The code in HierarchyTransformers extends Sentence-Transformers.
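
According to the HiT paper (see Citation below), training combines two hyperbolic objectives: a clustering loss that pulls each child entity towards its parent and away from negative (non-parent) entities in hyperbolic distance, and a centripetal loss that pushes parent entities closer to the origin of the Poincaré ball than their children. The snippet below is only an illustrative sketch of these objectives using geoopt; the curvature and the margins alpha and beta are placeholder assumptions, not the released training settings.

import torch
import geoopt

# Illustrative Poincaré ball; the actual curvature used by HiTs may differ
manifold = geoopt.PoincareBall(c=1.0)

def hit_loss(child, parent, negative, alpha=1.0, beta=0.1):
    # Clustering loss: a child should be closer (in hyperbolic distance)
    # to its parent than to a negative entity, by a margin alpha
    clustering = torch.relu(
        manifold.dist(child, parent) - manifold.dist(child, negative) + alpha
    )
    # Centripetal loss: a parent should lie nearer the origin than its
    # child, so more general entities sit closer to the centre of the ball
    centripetal = torch.relu(
        manifold.dist0(parent) - manifold.dist0(child) + beta
    )
    return (clustering + centripetal).mean()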

Get Started

Install the hierarchy_transformers package (see our repository) via pip or from GitHub.
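
For example, via pip (assuming the package is published on PyPI under its import name):

pip install hierarchy_transformers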

Use the following code to get started with HiTs:

from hierarchy_transformers import HierarchyTransformer
from hierarchy_transformers.utils import get_torch_device

# Set up the device (falls back to CPU if no GPU is found)
gpu_id = 0
device = get_torch_device(gpu_id)

# Load a pretrained HiT model
model = HierarchyTransformer.load_pretrained('Hierarchy-Transformers/HiT-MiniLM-L12-WordNet', device)

# Entity names to be encoded
entity_names = ["computer", "personal computer", "fruit", "berry"]

# Compute the entity embeddings
entity_embeddings = model.encode(entity_names)
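
Because HiT embeddings live in hyperbolic space, hierarchical relationships are probed with hyperbolic distances and norms rather than cosine similarity. The snippet below is a hedged sketch of such a probe: it assumes the model exposes its geoopt Poincaré ball as model.manifold, and centri_weight is a placeholder hyperparameter to be tuned on validation data, not a value from the paper.

# Suppose we want to test whether "computer" subsumes "personal computer"
# and "fruit" subsumes "berry"
child_embeddings = model.encode(["personal computer", "berry"], convert_to_tensor=True)
parent_embeddings = model.encode(["computer", "fruit"], convert_to_tensor=True)

# Hyperbolic distances between candidate child-parent pairs
# (assumption: `model.manifold` is the geoopt Poincaré ball of the embeddings)
dists = model.manifold.dist(child_embeddings, parent_embeddings)

# Hyperbolic norms, i.e. distances to the origin; parent entities tend
# to sit closer to the origin than their children
child_norms = model.manifold.dist0(child_embeddings)
parent_norms = model.manifold.dist0(parent_embeddings)

# Higher score suggests subsumption; `centri_weight` is a placeholder
# hyperparameter to be tuned on validation data
centri_weight = 1.0
scores = -(dists + centri_weight * (parent_norms - child_norms))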

Models

See available HiT models under this organisation.

Datasets

The datasets for training and evaluating HiTs are available on Zenodo.

Citation

Preprint on arXiv: https://arxiv.org/abs/2401.11374

Yuan He, Zhangdie Yuan, Jiaoyan Chen, Ian Horrocks. Language Models as Hierarchy Encoders. arXiv preprint arXiv:2401.11374 (2024).

@article{he2024language,
  title={Language Models as Hierarchy Encoders},
  author={He, Yuan and Yuan, Zhangdie and Chen, Jiaoyan and Horrocks, Ian},
  journal={arXiv preprint arXiv:2401.11374},
  year={2024}
}
