---
license: mit
language:
- en
pipeline_tag: feature-extraction
tags:
- e5-mistral-7b-instruct
- mlx-llm
- mlx
- feature-extraction
- embeddings
library_name: mlx-llm
---

# E5-mistral-7b-instruct

[Improving Text Embeddings with Large Language Models](https://arxiv.org/pdf/2401.00368.pdf). Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024

This model has 32 layers and an embedding size of 4096. See the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard) for benchmark results.

## Model description

Please refer to the [original model card](https://huggingface.co/intfloat/e5-mistral-7b-instruct) for more details on E5-mistral-7b-instruct.

## Use with mlx-llm

Download the weights from the Files section and install `mlx-llm` from GitHub:

```bash
git clone https://github.com/riccardomusmeci/mlx-llm
cd mlx-llm
pip install .
```

Run:

```python
import mlx.core as mx
import numpy as np
from mlx_llm.model import create_model
from transformers import AutoTokenizer

model = create_model(
    "e5-mistral-7b-instruct",
    weights_path="path/to/weights.npz",
    strict=False
)

def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery: {query}'

def last_token_pool(embeds: mx.array, attn_mask: mx.array) -> mx.array:
    # With left padding the last position of every sequence is a real token,
    # so the pooled embedding is simply the last hidden state; otherwise,
    # index each sequence at its own last non-padded position.
    left_padding = (attn_mask[:, -1].sum() == attn_mask.shape[0])
    if left_padding:
        return embeds[:, -1]
    else:
        sequence_lengths = attn_mask.sum(axis=1) - 1
        batch_size = embeds.shape[0]
        return embeds[mx.arange(batch_size), sequence_lengths]

task = 'Given a web search query, retrieve relevant passages that answer the query'

input_texts = [
    get_detailed_instruct(task, 'how much protein should a female eat'),
    # get_detailed_instruct(task, 'summit define'),
    "As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
    "Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]

tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-mistral-7b-instruct')

# prepare input ids and attention mask
max_length = 4096
batch_dict = tokenizer(
    input_texts,
    max_length=max_length - 1,
    return_attention_mask=False,
    padding=False,
    truncation=True
)
# append the EOS token so that last-token pooling sees the end-of-sequence token
batch_dict['input_ids'] = [
    input_ids + [tokenizer.eos_token_id] for input_ids in batch_dict['input_ids']
]
batch_dict = tokenizer.pad(
    batch_dict,
    padding=True,
    return_attention_mask=True,
    return_tensors='np'
)
x = mx.array(batch_dict["input_ids"].tolist())
attn_mask = mx.array(batch_dict["attention_mask"].tolist())

# compute embeddings
embeds = model.embed(x)
mx.eval(embeds)
embeds = np.array(last_token_pool(embeds, attn_mask))

# L2-normalize embeddings
norm_den = np.linalg.norm(embeds, axis=-1)
norm_embeds = embeds / norm_den[:, None]

# pairwise cosine similarities, scaled by 100
scores = (norm_embeds @ norm_embeds.T) * 100
print(scores)
```
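
The printed matrix contains the pairwise cosine similarities (scaled by 100) between every input. For retrieval you usually only need the query-to-passage scores. The snippet below is a minimal sketch that reuses `norm_embeds` from the example above and assumes the first entry of `input_texts` is the instruction-formatted query and the remaining entries are candidate passages:

```python
import numpy as np

# Assumes `norm_embeds` from the example above:
# row 0 is the instruction-formatted query, the remaining rows are passages.
query_embed = norm_embeds[:1]     # shape (1, 4096)
passage_embeds = norm_embeds[1:]  # shape (num_passages, 4096)

# Embeddings are already L2-normalized, so the dot product is the cosine
# similarity; the *100 scaling matches the convention used above.
query_scores = (query_embed @ passage_embeds.T) * 100  # shape (1, num_passages)

# Rank passages from most to least relevant to the query
for rank, idx in enumerate(np.argsort(-query_scores[0])):
    print(f"rank {rank + 1}: passage {idx} (score {query_scores[0, idx]:.2f})")
```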
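
If you prefer to fetch the weights programmatically rather than downloading them from the Files section by hand, `huggingface_hub` can be used. This is only a sketch: the repo id and filename below are placeholders, not values from this card, so replace them with this repository's actual id and the `.npz` weights file listed in the Files section.

```python
from huggingface_hub import hf_hub_download
from mlx_llm.model import create_model

# Hypothetical repo id and filename: replace with this repository's actual id
# and the .npz weights file shown in the Files section.
weights_path = hf_hub_download(
    repo_id="your-username/e5-mistral-7b-instruct-mlx",
    filename="weights.npz",
)

model = create_model(
    "e5-mistral-7b-instruct",
    weights_path=weights_path,
    strict=False,
)
```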