---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/datasetcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/datasets-cards
{}
---
# Dataset Card for `hlm-paraphrase-multilingual-mpnet-base-v2`
### Dataset Summary
A Chroma vectorstore for 红楼梦 (Dream of the Red Chamber), created with:
```python
import os

from chromadb.config import Settings
from langchain.document_loaders import TextLoader
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# Embeddings via sentence-transformers
model_name = 'paraphrase-multilingual-mpnet-base-v2'
embedding = SentenceTransformerEmbeddings(model_name=model_name)

# Fetch the source text of 红楼梦
url = 'https://raw.githubusercontent.com/ffreemt/multilingual-dokugpt/master/docs/hlm.txt'
os.system(f'wget -c {url}')
doc = TextLoader('hlm.txt').load()

# Split into overlapping chunks
text_splitter = RecursiveCharacterTextSplitter(
    separators=["\n\n", "\n", ".", "!", "?", ",", " ", ""],
    chunk_size=620,
    chunk_overlap=60,
    length_function=len,
)
doc_chunks = text_splitter.split_documents(doc)

# Persist to a local duckdb+parquet Chroma store
client_settings = Settings(
    chroma_db_impl="duckdb+parquet",
    anonymized_telemetry=False,
    persist_directory='db',
)

# Embedding all chunks takes 8-20 minutes on CPU
vectorstore = Chroma.from_documents(
    documents=doc_chunks,
    embedding=embedding,
    persist_directory='db',
    client_settings=client_settings,
)
vectorstore.persist()
```
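As a quick sanity check (not part of the original recipe), the freshly built store can be queried in the same session; the query string below is just an example:
```python
# Sanity check (assumption: run in the same session as the build above):
# a similarity search should return chunks from hlm.txt.
hits = vectorstore.similarity_search("贾宝玉", k=2)
print(len(hits), hits[0].page_content[:80])
```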
### How to use
Download the `hlm` directory to a local directory, e.g. `db`:
```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="mikeee/chroma-paraphrase-multilingual-mpnet-base-v2",
    repo_type="dataset",
    allow_patterns="hlm/*",
    local_dir="db",
    resume_download=True,
)
```
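To confirm the snapshot landed where expected (a hedged check, assuming the files end up under `db/hlm/`):
```python
from pathlib import Path

# List the downloaded Chroma files; expect parquet files and an index/ directory
for p in sorted(Path("db/hlm").rglob("*")):
    print(p)
```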
Load the vectorstore:
```python
from chromadb.config import Settings
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.vectorstores import Chroma

# Same embedding model that was used to build the store
model_name = 'paraphrase-multilingual-mpnet-base-v2'
embedding = SentenceTransformerEmbeddings(model_name=model_name)

client_settings = Settings(
    chroma_db_impl="duckdb+parquet",
    anonymized_telemetry=False,
    persist_directory='db/hlm',
)

db = Chroma(
    # persist_directory is picked up from client_settings
    embedding_function=embedding,
    client_settings=client_settings,
)

res = db.search("红楼梦主线", search_type="similarity", k=2)
print(res)
# [Document(page_content='通灵宝玉正面图式\u3000通灵宝玉反面图式\n\n\n\n玉宝灵通\u3000\u3000\u3000\u3000\u3000三二一\n\n仙莫\u3000\u3000\u3000\u3000\u3000\u3000知疗除\n\n寿失\u3000\u3000\u3000\u3000\u3000\u3000祸冤邪\n\n恒莫\u3000\u3000\u3000\u3000\u3000\u3000福疾崇\n\n昌忘\n\n\n\n宝钗看毕,【甲戌双行。。。
```
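For reference, a minimal sketch of plugging the store into a retrieval chain. The choice of `ChatOpenAI` (and the `OPENAI_API_KEY` it needs) is an assumption, not part of this dataset; any LangChain LLM can be substituted.
```python
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI  # assumption: swap in any LangChain LLM

# Use the Chroma store as a retriever and answer questions over 红楼梦
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0),  # needs OPENAI_API_KEY (assumption)
    chain_type="stuff",
    retriever=db.as_retriever(search_kwargs={"k": 4}),
)
print(qa.run("简要概括红楼梦的主线"))  # "Briefly summarize the main plotline of 红楼梦"
```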