---
license: mit
language:
- en
---
The original data is from http://sbert.net/datasets/simplewiki-2020-11-01.jsonl.gz.
We encode every passage with the `nq-distilbert-base-v1` model into PyTorch tensors, then normalize the embeddings with `sentence_transformers.util.normalize_embeddings`.
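Each line of the archive is a JSON object with a `title` and a list of `paragraphs`. A hypothetical record, for illustration only:

```python
# Illustrative record; the real article titles and paragraph texts differ
{"title": "Canberra", "paragraphs": ["Canberra is the capital city of Australia.", "..."]}
```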
```python
!pip install sentence-transformers==2.3.1
```
```python
import os
import json
import gzip

from sentence_transformers import SentenceTransformer
from sentence_transformers.util import http_get, normalize_embeddings

os.environ['DATASET_NAME'] = 'simplewiki-2020-11-01.jsonl.gz'
os.environ['DATASET_URL'] = 'http://sbert.net/datasets/simplewiki-2020-11-01.jsonl.gz'

# Download the dataset archive if it is not already present
http_get(os.getenv('DATASET_URL'), os.getenv('DATASET_NAME'))
passages = []
with gzip.open(os.getenv('DATASET_NAME'), 'rt', encoding='utf-8') as fIn:
    for line in fIn:
        data = json.loads(line.strip())
        # Alternative: add every paragraph as its own passage
        # passages.extend(data['paragraphs'])
        # Alternative: add only the first paragraph
        # passages.append(data['paragraphs'][0])
        for paragraph in data['paragraphs']:
            # We encode each passage as a [title, text] pair
            passages.append([data['title'], paragraph])
print('Passages:', len(passages))
bi_encoder = SentenceTransformer('nq-distilbert-base-v1')
bi_encoder.max_seq_length = 256  # truncate each passage to 256 word pieces
bi_encoder.to('cuda')

# Encode all passages into a single tensor on the GPU
corpus_embeddings = bi_encoder.encode(passages, convert_to_tensor=True, show_progress_bar=True).to('cuda')
# L2-normalize so that the dot product equals cosine similarity
corpus_embeddings = normalize_embeddings(corpus_embeddings)
print('Embeddings:', len(corpus_embeddings))
import pandas as pd

# Persist the embeddings as a CSV file, one row per passage
embedding_data = pd.DataFrame(corpus_embeddings.cpu().numpy())
embedding_data.to_csv('simple_english_wikipedia_2020_11_01.csv', index=False)
```
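Below is a minimal sketch of how the stored embeddings could be loaded back and used for semantic search with the same bi-encoder. The query string is a hypothetical example and the snippet assumes the CSV produced above is in the working directory; because the embeddings were normalized, the cosine similarity computed by `sentence_transformers.util.semantic_search` reduces to a plain dot product.

```python
import torch
import pandas as pd
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import normalize_embeddings, semantic_search

# Load the stored embeddings back into a float32 tensor
corpus_embeddings = torch.tensor(
    pd.read_csv('simple_english_wikipedia_2020_11_01.csv').values,
    dtype=torch.float32,
)

# Encode the query with the same bi-encoder and normalize it as well
bi_encoder = SentenceTransformer('nq-distilbert-base-v1')
query = 'What is the capital of Australia?'  # hypothetical example query
query_embedding = normalize_embeddings(bi_encoder.encode([query], convert_to_tensor=True))

# Retrieve the five most similar passages
hits = semantic_search(query_embedding, corpus_embeddings, top_k=5)[0]
for hit in hits:
    # 'corpus_id' is the row index, i.e. the position in the `passages` list
    print(hit['corpus_id'], round(hit['score'], 4))
```

For higher precision, the hits could optionally be re-ranked with a cross-encoder such as `cross-encoder/ms-marco-MiniLM-L-6-v2`, e.g. (assuming the `passages` list from the build step is still in memory):

```python
from sentence_transformers import CrossEncoder

cross_encoder = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2')
# Score (query, passage text) pairs; higher scores mean more relevant
scores = cross_encoder.predict([[query, ' '.join(passages[hit['corpus_id']])] for hit in hits])
```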