The following dataset was vectorized with the [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) model, and an index file was created with faiss.

[oshizo/japanese-wikipedia-paragraphs](https://huggingface.co/datasets/oshizo/japanese-wikipedia-paragraphs)

## Usage

First, download `index_me5-base_IVF2048_PQ192.faiss` from this repository.

```python
import faiss
import datasets
from sentence_transformers import SentenceTransformer

ds = datasets.load_dataset("oshizo/japanese-wikipedia-paragraphs", split="train")

index = faiss.read_index("./index_me5-base_IVF2048_PQ192.faiss")

model = SentenceTransformer("intfloat/multilingual-e5-base")

# e5 models expect the "query: " prefix on search queries
question = "日本で二番目に高い山は?"  # "What is the second highest mountain in Japan?"
emb = model.encode(["query: " + question])

# retrieve the ten nearest passages
scores, indexes = index.search(emb, 10)
scores = scores[0]
indexes = indexes[0]

results = []
for idx, score in zip(indexes, scores):
    passage = ds[int(idx)]
    passage["score"] = float(score)
    results.append(passage)
```
|