---
dataset_info:
  features:
  - name: context
    dtype: string
  - name: name
    dtype: string
  - name: embedding
    sequence: float32
  splits:
  - name: train
    num_bytes: 714665
    num_examples: 201
  download_size: 998567
  dataset_size: 714665
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
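The metadata above describes a single `train` split (201 examples) with `context`, `name`, and `embedding` columns. A quick sketch of loading it with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the train split declared in the metadata above
data = load_dataset("Manyah/incrustaciones", split="train")

# Each row has a context, a name, and a precomputed float32 embedding
print(data.features)
print(data[0]["name"])
```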
# Model
- "Alibaba-NLP/gte-multilingual-base"
You can find all the information about the model <a href="https://huggingface.co/Alibaba-NLP/gte-multilingual-base">here</a>.
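A minimal sketch of loading the model and encoding a sentence (it requires `trust_remote_code=True` because the model ships custom code):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-multilingual-base", trust_remote_code=True)

# Encode a sentence into a dense vector; the same call is used for queries below
embedding = model.encode("Example sentence")
print(embedding.shape)
```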
# Search
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
from datasets import load_dataset

model_name = "Alibaba-NLP/gte-multilingual-base"
model = SentenceTransformer(model_name, trust_remote_code=True)

# Dataset with a precomputed embedding for every context
raw_data = load_dataset('Manyah/incrustaciones')

question = ""  # write your query here
question_embedding = model.encode(question)

# Cosine similarity between the query and each stored embedding
sim = [
    cos_sim(raw_data['train'][i]['embedding'], question_embedding).item()
    for i in range(len(raw_data['train']))
]

# Context whose embedding is most similar to the query
index = sim.index(max(sim))
print(raw_data['train'][index]['context'])
```
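As an alternative to the per-row loop, the similarities can be computed in a single batched call (a sketch assuming the same `raw_data` and `question_embedding` as above):

```python
import torch

# Stack all stored embeddings into one matrix of shape (num_examples, dim)
corpus_embeddings = torch.tensor(raw_data['train']['embedding'])

# One cosine-similarity call against the whole corpus, shape (1, num_examples)
scores = cos_sim(question_embedding, corpus_embeddings)

best = int(scores.argmax())
print(raw_data['train'][best]['context'])
```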