import os

import bs4
from dotenv import load_dotenv
from langchain_text_splitters import CharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain.chains.summarize import load_summarize_chain
from langchain_openai import ChatOpenAI

load_dotenv()

# 1. Create the model
model = ChatOpenAI(
    model='qwen-plus',
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

# 2. Load the page content with a DocumentLoader (only the post header, title, and body)
loader = WebBaseLoader(
    web_paths=['https://lilianweng.github.io/posts/2023-06-23-agent/'],
    bs_kwargs=dict(
        parse_only=bs4.SoupStrainer(class_=('post-header', 'post-title', 'post-content'))
    )
)

docs = loader.load()
# print(docs)

# 3. Split the document into ~1000-token chunks (lengths measured with tiktoken)
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=1000, chunk_overlap=0)
split_docs = text_splitter.split_documents(docs)
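
# The splitter above works roughly like the sketch below: emit fixed-size
# windows over the text, stepping forward by (chunk_size - overlap).
# `split_fixed` is a simplified stand-in, not the LangChain implementation:
# it counts characters rather than tiktoken tokens and ignores separators.
def split_fixed(text, chunk_size, overlap=0):
    """Yield consecutive windows of at most chunk_size characters."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# e.g. split_fixed("abcdefgh", 4) -> ["abcd", "efgh"]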


# 4. Refine strategy: carry the summary of earlier chunks forward into each
# subsequent chunk, iterating until a final summary of the whole text emerges.
chain = load_summarize_chain(llm=model, chain_type='refine')
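
# Conceptually, the refine chain reduces the chunks sequentially, as in the
# sketch below. `summarize` and `refine` stand in for the two LLM prompts the
# chain issues (an initial summary prompt and a refine prompt); they are
# illustrative placeholders, not LangChain APIs.
def refine_summaries(chunks, summarize, refine):
    """Summarize the first chunk, then fold each later chunk into the summary."""
    summary = summarize(chunks[0])
    for chunk in chunks[1:]:
        summary = refine(summary, chunk)
    return summary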

result = chain.invoke(split_docs)
print(result['output_text'])

#
# **Final Refined Summary**
#
# LLM-powered autonomous agents are systems capable of performing complex, goal-oriented tasks without continuous
# human intervention. These agents typically incorporate planning components to break down tasks into actionable
# steps, memory components to retain and utilize past experiences or context, and tool-use components that allow
# interaction with external environments or APIs. The integration of these components enables agents to operate
# independently in dynamic and complex environments.
#
# Recent developments show that such agents can be applied to software development tasks. For instance, an autonomous
# agent may receive a natural language description of a desired software system and proceed to design and implement
# it fully—laying out the architecture, defining core classes and functions, writing all necessary code files,
# and including dependency management. This process demonstrates the agent’s ability to decompose a high-level goal
# into modular, executable components and realize them in a structured, functional codebase.
#
# Such implementations often follow best practices, including modularity, documentation, and proper project
# structure, and may involve tools like `pytest` for testing, `dataclasses` for structured data modeling,
# and appropriate dependency management files (e.g., `requirements.txt` for Python). This context serves as a
# practical demonstration of the capabilities of autonomous agents in real-world development workflows.
#
# Moreover, the agent is expected to reason through the implementation, explicitly state assumptions, and ensure that
# all components are fully realized in code. The process includes defining the core classes, functions, and methods,
# followed by generating each file in a structured format with appropriate language-specific conventions. The agent
# must ensure that all files are mutually compatible, functionally complete, and adhere to best practices in software
# engineering.
#
# Several frameworks and implementations have emerged to explore these capabilities, including **AutoGPT**,
# **GPT-Engineer**, and **HuggingGPT**, each demonstrating different aspects of autonomous task execution.
# Additionally, research like **Reflexion** explores agents with dynamic memory and self-reflection, while **MRKL
# Systems** and **Toolformer** focus on integrating discrete reasoning and tool use into LLM-based architectures.
# These systems highlight the importance of modular design and external knowledge integration for robust agent
# performance.
#
# However, several challenges remain in the development and deployment of such systems. One major limitation is the
# **finite context length** of LLMs, which restricts the amount of historical information, instructions, API context,
# and responses that can be maintained in active memory. While external mechanisms like vector stores and retrieval
# systems (e.g., ScaNN, Weaviate) can expand access to knowledge, they lack the nuance and depth of full attention
# over long sequences. This limitation also hampers the agent’s ability to perform self-reflection and learn from
# extended interaction histories.
#
# Another key challenge lies in **long-term planning and task decomposition**. Autonomous agents struggle to maintain
# coherent, multi-step plans when faced with unexpected errors or changing conditions. Unlike humans,
# who adapt through trial and error, LLM-based agents often fail to revise strategies effectively, limiting their
# robustness in dynamic environments. Approaches like **Tree of Thoughts** and **LLM+P** attempt to enhance planning
# capabilities, but consistent and reliable execution remains a work in progress.
#
# Additionally, the **reliability of the natural language interface** remains a concern. Since most agent systems
# rely on natural language as the interface between LLMs and external components (e.g., memory, tools), the output's
# correctness and consistency are not guaranteed. Issues like formatting errors or unexpected refusal to follow
# instructions can disrupt the agent’s operation, requiring significant engineering efforts to parse and validate
# model outputs.
#
# Despite these challenges, the application of autonomous agents in software development and beyond highlights their
# transformative potential—enabling end-to-end code generation from natural language prompts while maintaining
# architectural integrity and implementation fidelity. With continued research and development, LLM-powered
# autonomous agents are poised to play a pivotal role in automating complex workflows across a wide range of domains.
#
# ---
#
# **Citation**: Weng, Lilian. (Jun 2023). “LLM-powered Autonomous Agents”. Lil’Log.
# https://lilianweng.github.io/posts/2023-06-23-agent/.
#
# BibTeX:
# @article{weng2023agent,
#   title   = "LLM-powered Autonomous Agents",
#   author  = "Weng, Lilian",
#   journal = "lilianweng.github.io",
#   year    = "2023",
#   month   = "Jun",
#   url     = "https://lilianweng.github.io/posts/2023-06-23-agent/"
# }