Geraldine committed
Commit 539c591
Parent(s): af0f8d5

Update README.md

Files changed (1):
  1. README.md +65 -1
README.md CHANGED
@@ -18,11 +18,11 @@ The parameters passed in the url request are :
 - fq=publicationDateY_i:[2013%20TO%202023]
 - fl=halId_s,doiId_s,uri_s,title_s,subTitle_s,authFullName_s,producedDate_s,journalTitle_s,journalPublisher_s,abstract_s,fr_keyword_s,openAccess_bool,submitType_s
 
+The "combined" column contains a concatenation of the textual contents of three columns: title_s, subTitle_s and abstract_s.
 The embeddings corpus hal_embeddings.pkl stores the embeddings of the "combined" column values, converted into vectors with the sentence-transformers/all-MiniLM-L6-v2 embeddings model.
 
 Furthermore, the whole dataset (except the "abstract" and "combined" columns) has been converted into a Knowledge Graph and stored in a Neo4j graph store, which persists both texts and embeddings.
 The text embeddings model used is nomic-embed-text-v1.5.
-The Knowledge Graph Index is persiste
 
 
 ## Metadata extraction
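
For orientation (not part of this commit): a minimal sketch of semantic search over the precomputed corpus, assuming hal_embeddings.pkl sits at the root of this dataset repo and unpickles to one all-MiniLM-L6-v2 vector per row of hal_data.csv, in row order; the query string is illustrative.

```
import pickle

import numpy as np
import pandas as pd
from datasets import load_dataset
from huggingface_hub import hf_hub_download
from sentence_transformers import SentenceTransformer

# Load the tabular data and the precomputed embeddings corpus
hal_data = load_dataset("Geraldine/hal_univcotedazur_shs_articles_2013-2023", data_files="hal_data.csv")
df = pd.DataFrame(hal_data["train"])

pkl_path = hf_hub_download(
    repo_id="Geraldine/hal_univcotedazur_shs_articles_2013-2023",
    filename="hal_embeddings.pkl",  # assumed to sit at the repo root
    repo_type="dataset",
)
with open(pkl_path, "rb") as f:
    corpus_embeddings = np.asarray(pickle.load(f))  # assumed: one vector per row, in row order

# Same model that produced the corpus embeddings
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Embed a free-text query and rank rows by cosine similarity
query = "open access publishing in the humanities"
query_embedding = model.encode(query, normalize_embeddings=True)
corpus_norm = corpus_embeddings / np.linalg.norm(corpus_embeddings, axis=1, keepdims=True)
scores = corpus_norm @ query_embedding

top_k = np.argsort(-scores)[:5]
print(df.iloc[top_k][["title_s", "uri_s"]])
```

Normalizing both sides turns the dot product into a cosine similarity, so the top scores point to the rows whose combined title/subtitle/abstract text is closest to the query.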
@@ -157,3 +157,67 @@ article_data_list
 
 The KnowledgeGraphIndex is persisted in the /index_storage folder and can easily be reloaded into a Neo4j database and/or queried with a LlamaIndex KnowledgeGraphQueryEngine.
 
+```
+import pandas as pd
+from datasets import load_dataset
+from llama_index.core import SimpleDirectoryReader, KnowledgeGraphIndex, StorageContext, load_index_from_storage
+from llama_index.graph_stores.neo4j import Neo4jGraphStore
+from llama_index.vector_stores.neo4jvector import Neo4jVectorStore
+from llama_index.embeddings.nomic import NomicEmbedding
+from llama_index.llms.groq import Groq
+from llama_index.core import Settings
+import nest_asyncio
+
+# Load the dataset
+hal_data = load_dataset("Geraldine/hal_univcotedazur_shs_articles_2013-2023", data_files="hal_data.csv")
+df = pd.DataFrame(hal_data["train"])
+df = df.drop(columns=["abstract_s", "combined"])
+df.to_csv("hal_data.csv", index=False, encoding="utf-8")
+
+# Document reader
+reader = SimpleDirectoryReader(input_files=["./hal_data.csv"])
+documents = reader.load_data()
+
+# Embeddings & LLM
+NOMIC_API_KEY = "..."
+GROQ_API_KEY = "..."
+
+nest_asyncio.apply()
+
+embed_model = NomicEmbedding(
+    api_key=NOMIC_API_KEY,
+    dimensionality=768,
+    model_name="nomic-embed-text-v1.5",
+)
+
+llm = Groq(model="mixtral-8x7b-32768", api_key=GROQ_API_KEY)
+
+Settings.llm = llm
+Settings.embed_model = embed_model
+Settings.chunk_size = 512
+
+# Neo4j Graph store & KnowledgeGraph index creation
+graph_store = Neo4jGraphStore(
+    username="...",
+    password="...",
+    url="...",
+)
+
+storage_context = StorageContext.from_defaults(graph_store=graph_store)
+
+index = KnowledgeGraphIndex.from_documents(
+    documents,
+    storage_context=storage_context,
+    include_embeddings=True,
+    max_triplets_per_chunk=2,
+)
+
+# Persist index
+index.storage_context.persist("./index_storage")
+
+# Reload index
+storage_context = StorageContext.from_defaults(persist_dir="./index_storage")
+index = load_index_from_storage(storage_context)
+```
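
Not shown in the commit: once Settings.llm and Settings.embed_model are configured as in the snippet above, the reloaded index can be queried through the standard LlamaIndex query-engine interface. A minimal sketch follows (query text and parameters are illustrative); the KnowledgeGraphQueryEngine mentioned earlier would instead be built on a StorageContext wired to the Neo4j graph store.

```
from llama_index.core import StorageContext, load_index_from_storage

# Reload the persisted KnowledgeGraphIndex from the /index_storage folder
storage_context = StorageContext.from_defaults(persist_dir="./index_storage")
index = load_index_from_storage(storage_context)

# Query the graph through the index's built-in query engine
query_engine = index.as_query_engine(
    include_text=False,
    response_mode="tree_summarize",
)
response = query_engine.query("Which journals appear most often in this corpus?")
print(response)
```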