---
{}
---

# nemesis-gte-tiny

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.

It is fine-tuned from [`TaylorAI/gte-tiny`](https://huggingface.co/TaylorAI/gte-tiny) on public documents processed through a [Nemesis](https://github.com/SpecterOps/Nemesis) pipeline. The ~2,500 documents were split into 512-token chunks and submitted to Gemini for question/answer generation. Each query generated two questions, and the entire process was executed twice, resulting in ~10k questions for the context chunks. Each question was linked to its positive chunk, and 5 random text chunks from documents _other_ than the source were used as the negative training samples. We followed the guide from [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/blob/master/examples/finetune/README.md) for fine-tuning. The fine-tuned model was merged back with the `TaylorAI/gte-tiny` base using [LM_Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail) as the guide described.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('Harmj0y/nemesis-gte-tiny')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Harmj0y/nemesis-gte-tiny')
model = AutoModel.from_pretrained('Harmj0y/nemesis-gte-tiny')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
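# Mean pooling matches the Pooling module in this model's architecture
# (pooling_mode_mean_tokens=True; CLS and max pooling are disabled).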
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Harmj0y/nemesis-gte-tiny)

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

Fine-tuned from [TaylorAI/gte-tiny](https://huggingface.co/TaylorAI/gte-tiny/) using [FlagEmbedding](https://github.com/FlagOpen/FlagEmbedding/)'s [embedding fine-tuning guide](https://github.com/FlagOpen/FlagEmbedding/blob/master/examples/finetune/README.md).
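The merge back into the base model was done with [LM_Cocktail](https://github.com/FlagOpen/FlagEmbedding/tree/master/LM_Cocktail). A minimal sketch of that step, using LM_Cocktail's `mix_models` API; the fine-tuned checkpoint path and the 50/50 weights are illustrative assumptions, not the exact values used for this model:

```python
# Minimal sketch of the LM_Cocktail merge step. The fine-tuned checkpoint
# path and the equal weights below are illustrative assumptions.
from LM_Cocktail import mix_models

model = mix_models(
    model_names_or_paths=["TaylorAI/gte-tiny", "./gte-tiny-nemesis-finetune"],
    model_type='encoder',  # embedding (encoder) models
    weights=[0.5, 0.5],    # weights must sum to 1.0
    output_path='./nemesis-gte-tiny'
)
```

Per the LM_Cocktail guide, blending the fine-tuned checkpoint back into the base in this way helps preserve the base model's general-purpose performance while keeping the gains from fine-tuning.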