# %% import
from langchain.chains import ConversationChain
from langchain.memory import (
    ConversationBufferMemory, ConversationSummaryMemory, ConversationBufferWindowMemory
)
from langchain_openai import OpenAI
'''
Tiktoken, developed by OpenAI, is a tool used for text tokenization.
Tokenization involves dividing a text into smaller units, such as letters or words. Tiktoken allows you to count tokens and estimate the cost of using the
OpenAI API, which is billed based on token usage. It uses byte pair encoding (BPE), a compression algorithm that replaces frequently occurring pairs of
bytes with a single byte.
In summary, Tiktoken helps with efficient text processing, token counting, and cost estimation for using OpenAI's API.
'''
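# %%
# The byte pair encoding (BPE) idea described above can be sketched in a few
# lines of plain Python. This is a toy illustration of a single merge step
# only, not tiktoken's actual implementation.
from collections import Counter


def most_frequent_pair(tokens):
    # Count adjacent token pairs and return the most common one.
    return Counter(zip(tokens, tokens[1:])).most_common(1)[0][0]


def bpe_merge(tokens, pair, new_token):
    # Replace every occurrence of `pair` with `new_token`.
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(new_token)
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged


toy_tokens = list("aaabdaaabac")
best_pair = most_frequent_pair(toy_tokens)      # ('a', 'a') occurs most often
toy_tokens = bpe_merge(toy_tokens, best_pair, "aa")
print(toy_tokens)  # 11 characters compressed to 9 tokens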
import os

# os.environ['OPENAI_API_KEY'] = ""
# os.environ['OPENAI_API_BASE'] = ""
# %%
import dotenv

dotenv.load_dotenv()
# %%

# %% model
# llm = OpenAI(temperature=0, model_name='gpt-4')
llm = OpenAI(temperature=0)

# %%
'''
What is a Memory?
Chains and Agents are stateless by default, treating each query independently. However, in applications like chatbots, it's crucial to remember past
interactions. The concept of "Memory" serves that purpose.
'''

# %% Different Types Of Memories
# ConversationBufferMemory
'''
Imagine you're having a conversation with someone, and you want to remember what you've discussed so far.
The ConversationBufferMemory does exactly that in a chatbot or similar system. It keeps a record, or "buffer," of the past parts of the conversation.

This buffer is an essential part of the context, which helps the chatbot generate better responses. The unique thing about this memory is that it stores the
previous conversation exactly as it was, without any changes.

It preserves the raw form of the conversation, allowing the chatbot to refer back to specific parts accurately. In summary, the
ConversationBufferMemory helps the chatbot remember the conversation history, enhancing the overall conversational experience.

Pros of ConversationBufferMemory:
    * Complete conversation history: It retains the entire conversation history, ensuring comprehensive context for the chatbot.
    * Accurate references: By storing conversation excerpts in their original form, it enables precise referencing of past interactions, enhancing accuracy.
    * Contextual understanding: The preserved raw form of the conversation helps the chatbot maintain a deep understanding of the ongoing dialogue.
    * Enhanced responses: With access to the complete conversation history, the chatbot can generate more relevant and coherent responses.

Cons of ConversationBufferMemory:
    * Increased memory usage: Storing the entire conversation history consumes memory resources, potentially leading to memory constraints.
    * Potential performance impact: Large conversation buffers may slow down processing and response times, affecting the overall system performance.
    * Limited scalability: As the conversation grows, the memory requirements and processing load may become impractical for extremely long conversations.
    * Privacy concerns: Storing the entire conversation history raises privacy considerations, as sensitive or personal information may be retained in the buffer.
'''
conversation = ConversationChain(
    llm=llm,
    verbose=True,
    # ConversationBufferMemory is just a wrapper around ChatMessageHistory that extracts the messages into a variable
    memory=ConversationBufferMemory()
)
# %%
# Let's have a look at the prompt template that is being sent to the LLM
print(conversation.prompt.template)

conversation("Good morning AI")
conversation("My name is Steve")  # 如果名称小写, ai可能不认
conversation("My sister is Sandy")
print(conversation.predict(input="I stay in hyderabad, India"))

print(conversation.memory.buffer)
print(conversation.predict(input="What is my name?"))
print(conversation.predict(input="Who is my sister?"))

# %%
'''
ConversationBufferWindowMemory

Imagine you have a limited space in your memory to remember recent conversations.

The ConversationBufferWindowMemory is like having a short-term memory that only keeps track of the most recent interactions. It intentionally
drops the oldest ones to make room for new ones.

This helps manage the memory load and reduces the number of tokens used. The important thing is that it still keeps the latest parts of the conversation in
their original form, without any modifications.
So, it retains the most recent information for the chatbot to refer to, ensuring a more efficient and up-to-date conversation experience.

Pros of ConversationBufferWindowMemory:
    * Efficient memory utilization: It maintains a limited memory space by only retaining the most recent interactions, optimizing memory usage.
    * Reduced token count: Dropping the oldest interactions helps keep the token count low, preventing potential token limitations.
    * Unmodified context retention: The latest parts of the conversation are preserved in their original form, ensuring accurate references and contextual understanding.
    * Up-to-date conversations: By focusing on recent interactions, it allows the chatbot to stay current and provide more relevant responses.

Cons of ConversationBufferWindowMemory:
    * Limited historical context: Since older interactions are intentionally dropped, the chatbot loses access to the complete conversation history, potentially impacting long-term context and accuracy.
    * Loss of older information: Valuable insights or details from earlier interactions are not retained, limiting the chatbot's ability to refer back to past conversations.
    * Reduced depth of understanding: Without the full conversation history, the chatbot may have a shallower understanding of the user's context and needs.
    * Potential loss of context relevance: Important information or context from older interactions may be disregarded, affecting the chatbot's ability to provide comprehensive responses in certain scenarios.
'''
# ConversationBufferWindowMemory
conversation = ConversationChain(
    llm=llm,
    verbose=True,
    # ConversationBufferWindowMemory keeps a list of the interactions of the conversation over time.
    # It only uses the last K interactions, maintaining a sliding window of recent exchanges so the buffer does not grow too large.
    memory=ConversationBufferWindowMemory(k=1)
)

# %%
# Let's have a look at the prompt template that is being sent to the LLM
'''
Current conversation:
{history}
Human: {input}
AI:
'''
print(conversation.prompt.template)

# predict() also adds the exchange to history; above we did the same by calling the chain directly, e.g. conversation("...")
conversation.predict(input="Good morning AI")
conversation.predict(input="My name is Steve")  # 如果名称小写, ai可能不认
conversation.predict(input="My sister is Sandy")

# we set k=1, so only the last interaction is kept
'''
memory:
	Human: My sister is Sandy
'''
print(f'memory:\n\t{conversation.memory.buffer}')
# Your sister is Sandy.
print(conversation.predict(input="Who is my sister?"))
# Your name is not specified in my database.
print(conversation.predict(input="What is my name?"))

# %% ConversationSummaryMemory -> stores a summary of the conversation
'''
With ConversationBufferMemory, the length of the conversation keeps increasing, which can become a problem if it grows too large for our LLM to
handle.

To overcome this, we introduce ConversationSummaryMemory. It keeps a summary of our past conversation snippets as our history. But how does it
summarize? Here comes the LLM to the rescue! The LLM helps condense or summarize the conversation, capturing the key information.

So, instead of storing the entire conversation, we store a summarized version. This helps manage the token count and allows the LLM to process the
conversation effectively. In summary, ConversationSummaryMemory keeps a condensed version of previous conversations using the power of LLM
summarization.

Pros of ConversationSummaryMemory:
    * Efficient memory management: It keeps the conversation history in a summarized form, reducing the memory load.
    * Improved processing: By condensing the conversation snippets, it makes it easier for the language model to process and generate responses.
    * Avoids token limits: It helps prevent exceeding the token count limit, ensuring the prompt remains within the processing capacity of the model.
    * Retains important information: The summary captures the essential aspects of previous interactions, allowing relevant context to be maintained.

Cons of ConversationSummaryMemory:
    * Potential loss of detail: Since the conversation is summarized, some specific details or nuances from earlier interactions might be omitted.
    * Reliance on summarization quality: The accuracy and effectiveness of the summarization depend on the language model's capability, which might introduce errors or misinterpretations.
    * Limited historical context: Due to summarization, the model's access to the complete conversation history may be limited, potentially impacting the depth of understanding.
    * Reduced granularity: The summarized form may lack the fine-grained information present in the original conversation, potentially affecting the accuracy of responses in certain scenarios.
'''

conversation = ConversationChain(
    llm=llm,
    verbose=True,
    memory=ConversationSummaryMemory(llm=llm)
)

# %%
# Let's have a look at the prompt template that is being sent to the LLM
'''
template:
	The following is a friendly conversation between a human and an AI. 
	The AI is talkative and provides lots of specific details from its context. 
	If the AI does not know the answer to a question, it truthfully says it does not know.
'''
print(f'template:\n\t{conversation.prompt.template}')

conversation.predict(input="Good morning AI")
conversation.predict(input="My name is Steve")  # 如果名称小写, ai可能不认
conversation.predict(input="My sister is Sandy")

'''
memory:
	
The human greets the AI and the AI responds with the current time and conditions in its server room. 
The AI then asks how it can assist the human, addressing them by name. 
The human reveals their sister's name and the AI greets her by name, providing the current time and conditions in the server room. 
The AI then asks how it can assist the human today.
'''
print(f'memory:\n\t{conversation.memory.buffer}')
# Your name is [insert name here]. How can I assist you today?
print(conversation.predict(input="What is my name?"))

# %% ConversationTokenBufferMemory
import tiktoken
from langchain.memory import ConversationTokenBufferMemory

'''
ConversationTokenBufferMemory is a memory mechanism that stores recent interactions in a buffer within the system's memory.
Unlike other methods that rely on the number of interactions, this memory system decides when to clear or flush interactions based on the length of
tokens used.

Tokens are units of text, like words or characters, and the buffer is cleared when the token count exceeds a certain threshold. By using token length as
the criterion, the memory system ensures that the buffer remains manageable in terms of memory usage.
This approach helps maintain efficient memory management and enables the system to handle conversations of varying lengths effectively.

Pros of ConversationTokenBufferMemory:
    * Efficient memory management: By using token length instead of the number of interactions, the memory system optimizes memory usage and prevents excessive memory consumption.
    * Flexible buffer size: The system adapts to conversations of varying lengths, ensuring that the buffer remains manageable and scalable.
    * Accurate threshold determination: Flushing interactions based on token count provides a more precise measure of memory usage, resulting in a better balance between memory efficiency and retaining relevant context.
    * Improved system performance: With efficient memory utilization, the overall performance of the system, including response times and processing speed, can be enhanced.

Cons of ConversationTokenBufferMemory:
    * Potential loss of context: Flushing interactions based on token length may remove earlier interactions that contain important context or information, potentially affecting the accuracy of responses.
    * Complexity in threshold setting: Determining the appropriate token count threshold may require careful experimentation to balance memory usage against context retention.
    * Difficulty in long-term context retention: Because flushing is token-based, retaining long-term context may be challenging, as older interactions are more likely to be removed from the buffer.
    * Impact on response quality: In high-context conversations, token-based flushing may reduce the depth of understanding and the quality of responses.
'''
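
# %%
# The token-based eviction described above can be sketched in plain Python.
# A whitespace word count stands in for a real tokenizer such as tiktoken;
# this illustrates the flushing policy only, not LangChain's implementation.
def flush_to_token_limit(messages, max_tokens, count_tokens=lambda m: len(m.split())):
    buffer = list(messages)
    # Evict the oldest messages first until the buffer fits the token budget.
    while buffer and sum(count_tokens(m) for m in buffer) > max_tokens:
        buffer.pop(0)
    return buffer


history = ["Good morning AI", "My name is Steve", "My sister is Sandy"]
print(flush_to_token_limit(history, max_tokens=8))
# ['My name is Steve', 'My sister is Sandy']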

conversation = ConversationChain(
    llm=llm,
    verbose=True,
    # ConversationTokenBufferMemory keeps recent interactions in memory and uses token length,
    # rather than the number of interactions, to decide when to flush the conversation.
    memory=ConversationTokenBufferMemory(llm=llm, max_token_limit=60)
)

# %%
# Let's have a look at the prompt template that is being sent to the LLM
'''
template:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation:
{history}
Human: {input}
AI: 
'''
print(f'template:\n\t{conversation.prompt.template}')

conversation.predict(input="Good morning AI")
conversation.predict(input="My name is Steve")  # 如果名称小写, ai可能不认
conversation.predict(input="My sister is Sandy")

'''
memory:
	Human: My sister is Sandy 
'''
print(f'memory:\n\t{conversation.memory.buffer}')
# Your name is Steve. Did you forget?
print(conversation.predict(input="What is my name?"))
