---
license: apache-2.0
task_categories:
  - question-answering
  - summarization
  - conversational
  - sentence-similarity
language:
  - en
pretty_name: FAISS Vector Store of Embeddings of the Chartered Financial Analysts Level 1 Curriculum
tags:
  - faiss
  - langchain
  - instructor embeddings
  - vector stores
  - LLM
---
# Vector store of embeddings for the CFA Level 1 Curriculum

This is a FAISS vector store created with Sentence Transformer embeddings using LangChain. Use it for similarity search, question answering, or anything else that leverages embeddings! 😃

Creating these embeddings can take a while, so here's a convenient, downloadable one 🤗

## How to use

1. Download the data
2. Load it with LangChain

```
pip install -qqq langchain sentence_transformers faiss-cpu huggingface_hub
```

```
import os

from langchain.embeddings import HuggingFaceEmbeddings, HuggingFaceInstructEmbeddings
from langchain.vectorstores.faiss import FAISS
from huggingface_hub import snapshot_download
```

```
# download the vector store for the curriculum
cache_dir = "cfa_level_1_cache"
vectorstore = snapshot_download(repo_id="nickmuchi/CFA_Level_1_Text_Embeddings",
                                repo_type="dataset",
                                revision="main",
                                allow_patterns="cfa/cfa_level_1/*",  # download only the vector store folder
                                cache_dir=cache_dir,
                                )
```
```
# get the path to the `vectorstore` folder that was just downloaded;
# we'll look inside `cache_dir` for the folder we want
target_dir = "cfa/cfa_level_1"
```

```
# walk through the directory tree recursively until we find the target directory
target_path = None
for root, dirs, files in os.walk(cache_dir):
    # `target_dir` contains a "/", so compare against the normalized path of `root`
    if root.replace(os.sep, "/").endswith(target_dir):
        target_path = root
        break
```
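
Alternatively, `snapshot_download` returns the local path of the downloaded snapshot, so (assuming the `cfa/cfa_level_1` folder layout used above) you can skip the walk and join the paths directly:

```
# minimal alternative, assuming the snapshot contains a `cfa/cfa_level_1` folder
target_path = os.path.join(vectorstore, target_dir)
```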

```
# load embeddings
# this is what was used to create embeddings for the text
embed_instruction = "Represent the financial paragraph for document retrieval: "
query_instruction = "Represent the question for retrieving supporting documents: "

model_sbert = "sentence-transformers/all-mpnet-base-v2"
sbert_emb = HuggingFaceEmbeddings(model_name=model_sbert)

model_instr = "hkunlp/instructor-large"
instruct_emb = HuggingFaceInstructEmbeddings(model_name=model_instr,
                                             embed_instruction=embed_instruction,
                                             query_instruction=query_instruction)

# load vector store to use with langchain
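# the embeddings passed here must match the model used to build the index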
docsearch = FAISS.load_local(folder_path=target_path, embeddings=sbert_emb)

# similarity search
question = "How do you hedge the interest rate risk of an MBS?"
search = docsearch.similarity_search(question, k=4)

for item in search:
    print(item.page_content)
    print(f"From page: {item.metadata['page']}")
    print("---")

```
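
For question answering, you can wrap the vector store in a retrieval chain. Below is a minimal sketch using the same legacy `langchain` API as above; it assumes the `openai` package is installed and an `OPENAI_API_KEY` is set, but any LangChain-compatible LLM can be swapped in.

```
# question answering over the vector store (sketch, assumes an OpenAI API key)
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",  # stuff the retrieved passages into a single prompt
    retriever=docsearch.as_retriever(search_kwargs={"k": 4}),
)

print(qa.run("How do you hedge the interest rate risk of an MBS?"))
```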