### ChatWithPinecone 🌲

This is a conversational application that integrates Pinecone, OpenAI embeddings, and the Chainlit framework. Here's an analysis of its key components and functionality:

A Pinecone index was created to store and retrieve vectorized data. OpenAI embeddings convert text to vectors so it can be stored and searched in the Pinecone index.

The following diagram shows the flow of data through the application:

1. The user enters a message in the chatbot.
2. The user's prompt is vectorized using OpenAI embeddings.
3. The embedded prompt is sent to the Pinecone storage index, which returns the top three documents related to the prompt.
4. The top three documents are summarized by an LLM, which provides an answer back to the user.
5. In-memory storage is enabled using LangChain's memory caching feature, allowing the application to keep the top three documents in memory for faster retrieval.

![Flow Diagram](./public/image1.png)
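The retrieval step can be sketched locally. The snippet below is a minimal stand-in for what the Pinecone query does conceptually: it scores stored vectors against the embedded prompt by cosine similarity and returns the top three matching documents. The function names and the `(text, vector)` storage layout are illustrative, not the actual Pinecone API.

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k_documents(query_vector, indexed_docs, k=3):
    # indexed_docs: list of (text, vector) pairs, standing in for index entries.
    scored = sorted(
        indexed_docs,
        key=lambda doc: cosine_similarity(query_vector, doc[1]),
        reverse=True,
    )
    return [text for text, _ in scored[:k]]
```

In the real application, the query vector would come from an OpenAI embeddings call and the similarity search would run inside Pinecone rather than in local Python.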
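The caching step can be sketched the same way. This is a hypothetical in-memory cache keyed by the user's prompt, not LangChain's actual memory API: if a prompt has been seen before, the cached documents are returned without a new round trip to the index.

```python
class DocumentCache:
    # Minimal dict-backed cache; a stand-in for LangChain's memory caching.
    def __init__(self):
        self._store = {}

    def get(self, prompt):
        return self._store.get(prompt)

    def put(self, prompt, documents):
        self._store[prompt] = documents

def retrieve_with_cache(prompt, cache, fetch_top_three):
    # fetch_top_three: callable that performs the embed-and-query round trip.
    cached = cache.get(prompt)
    if cached is not None:
        return cached
    docs = fetch_top_three(prompt)
    cache.put(prompt, docs)
    return docs
```

On a cache hit the expensive embedding and index query are skipped entirely, which is the "faster retrieval" described above.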