---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for `hlm-paraphrase-multilingual-mpnet-base-v2`
### Dataset Summary
A Chroma vectorstore for 红楼梦 (Dream of the Red Chamber), created with:
```python
import os

from chromadb.config import Settings
from langchain.document_loaders import TextLoader
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

model_name = 'paraphrase-multilingual-mpnet-base-v2'
embedding = SentenceTransformerEmbeddings(model_name=model_name)

# fetch the source text (红楼梦, plain text)
url = 'https://raw.githubusercontent.com/ffreemt/multilingual-dokugpt/master/docs/hlm.txt'
os.system(f'wget -c {url}')

doc = TextLoader('hlm.txt').load()
text_splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n", ".", "!", "?", ",", " ", ""],
    chunk_size=620,
    chunk_overlap=60,
    length_function=len,
)
doc_chunks = text_splitter.split_documents(doc)

client_settings = Settings(
    chroma_db_impl="duckdb+parquet",
    anonymized_telemetry=False,
    persist_directory='db',
)

# embedding takes 8-20 minutes on CPU
vectorstore = Chroma.from_documents(
    documents=doc_chunks,
    embedding=embedding,
    persist_directory='db',
    client_settings=client_settings,
)
vectorstore.persist()
```
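The persisted store lands in `db/` as DuckDB+Parquet files. As a quick sanity check you can reopen the directory and count the stored chunks (a sketch; `_collection` is a private LangChain attribute and may change across versions):
```python
# reopen the freshly persisted store and count the embedded chunks
db_check = Chroma(
    embedding_function=embedding,
    client_settings=client_settings,
)
print(db_check._collection.count())  # should equal len(doc_chunks)
```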
### How to use
Download the `hlm` directory to a local directory, e.g. `db`:
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="mikeee/chroma-paraphrase-multilingual-mpnet-base-v2",
    repo_type="dataset",
    allow_patterns="hlm/*",
    local_dir="db",
    resume_download=True,
)
```
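To confirm the snapshot landed where expected (the path follows the `local_dir` and `allow_patterns` arguments above):
```python
import os

# the chroma parquet files and index should now sit under db/hlm
print(os.listdir("db/hlm"))
```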
Load the vectorstore:
```python
from chromadb.config import Settings
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.vectorstores import Chroma

model_name = 'paraphrase-multilingual-mpnet-base-v2'
embedding = SentenceTransformerEmbeddings(model_name=model_name)

client_settings = Settings(
    chroma_db_impl="duckdb+parquet",
    anonymized_telemetry=False,
    persist_directory='db/hlm',  # points at the downloaded snapshot
)
db = Chroma(
    embedding_function=embedding,
    client_settings=client_settings,
)
res = db.search("红楼梦主线", search_type="similarity", k=2)
print(res)
# [Document(page_content='通灵宝玉正面图式\u3000通灵宝玉反面图式\n\n\n\n玉宝灵通\u3000\u3000\u3000\u3000\u3000三二一\n\n仙莫\u3000\u3000\u3000\u3000\u3000\u3000知疗除\n\n寿失\u3000\u3000\u3000\u3000\u3000\u3000祸冤邪\n\n恒莫\u3000\u3000\u3000\u3000\u3000\u3000福疾崇\n\n昌忘\n\n\n\n宝钗看毕,【甲戌双行。。。
```
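Beyond raw similarity search, the store plugs into a LangChain retrieval chain. A minimal sketch, assuming an OpenAI API key is set; any LangChain LLM works in place of `OpenAI()`:
```python
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# wrap the vectorstore as a retriever returning the 4 closest chunks
retriever = db.as_retriever(search_kwargs={"k": 4})
qa = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=retriever)

print(qa.run("红楼梦的主线是什么?"))  # "What is the main plotline of 红楼梦?"
```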