import os

import gradio as gr
from huggingface_hub import InferenceClient
"""
For more information on `huggingface_hub` Inference API support, please check the docs: https://huggingface.co/docs/huggingface_hub/v0.22.2/en/guides/inference
"""
#client = InferenceClient("HuggingFaceH4/zephyr-7b-beta")
import google.generativeai as genai

# NOTE: in a public Space these keys should live in the repository secrets and be
# read from the environment rather than hardcoded in source.
os.environ["API_KEY"] = 'AIzaSyB8Hj4oCbBH9arFWSgybHnbpZLs2sa4p1w'
os.environ["GOOGLE_API_KEY"] = 'AIzaSyBjuYTWBlHg4W2wGaQCKKbigz6deZuLUJc'
genai.configure(api_key=os.environ["API_KEY"])
global context
context = '''
You are an assistant created by the Agentville team. Answer questions using the repository of content below, and format every response as follows:
Answer
Reference link
Agentville Academy
1. Next-gen Digital Helpers
Autonomous agents are the future of technology. They're intelligent, adaptable, and can learn from their experiences. Imagine having a digital assistant that anticipates your needs, simplifies complex tasks, and helps you achieve your goals faster.
Link: https://www.youtube.com/watch?v=fqVLjtvWgq8
2. How to improve multi-agent interactions?
The biggest challenge in the world of autonomous agents is improving the quality of agents' performance over time. From MetaGPT to AutoGen, every researcher is trying to overcome this challenge.
In this video, Negar Mehr, assistant professor of aerospace engineering at UIUC, discusses the challenges of enabling safe and intelligent multi-agent interactions in autonomous systems.
Watch the video to understand the connection between the movies A Beautiful Mind and Cinderella and autonomous agents!
Link: https://www.youtube.com/watch?v=G3JoGvZABoE&t=2426s
3. Survey of Autonomous Agents
This comprehensive survey systematically reviews LLM-based autonomous agents, covering how they are constructed, where they are applied, and how they are evaluated.
Link: https://arxiv.org/abs/2308.11432v1
4. Can the Machines really think?
In this video, Feynman argues that while machines are better than humans at many things, like arithmetic, problem-solving, and processing large amounts of data, they will never achieve human-like thinking and intelligence. They would in fact be smart and intelligent in their own ways and accomplish tasks more complicated than a human could.
Link: https://www.youtube.com/watch?v=ipRvjS7q1DI
5. Six Must-Know Autonomous AI Agents
These new Autonomous AI Agents Automate and Optimize Workflows like never before
Most LLM-based multi-agent systems have been pretty good at handling simple tasks with predefined agents. But guess what? AutoAgents has taken it up a notch! 🚀
It dynamically generates and coordinates specialized agents, building an AI dream team tailored to various tasks. It's like having a squad of task-specific experts collaborating seamlessly! 🏆🌐🔍
Link: https://huggingface.co/spaces/LinkSoul/AutoAgents
6. AI Agent Landscape: Overview 🌐
If you're as intrigued by the world of AI Agents as we are, you're in for a treat! Delve into e2b.dev's meticulously curated list of AI Agents, showcasing a diverse array of projects that includes both open-source and proprietary innovations. From AutoGPT to the latest AutoGen, the list covers all the latest and greatest from the world of autonomous agents!
All the agents are organized based on the tasks they excel at. How many of these have you explored?
Link: https://github.com/e2b-dev/awesome-ai-agents
7. MemGPT: LLM as operating system with memory
Ever wished AI could remember and adapt like humans? MemGPT turns that dream into reality! It's like a memory upgrade for language models. Dive into unbounded context with MemGPT and reshape the way we interact with AI.
This is a groundbreaking release from the creators of Gorilla! ✨
Link: https://memgpt.ai/
8. OpenAgents: AI Agents Work Freely To Create Software, Web Browse, Play with Plugins, & More!
A game-changing platform that's reshaping the way language agents work in the real world.
Unlike its counterparts, OpenAgents offers a fresh perspective. It caters to non-expert users, granting them access to a variety of language agents and emphasizing application-level designs. This powerhouse allows you to analyze data, call plugins, and take command of your browser—providing functionalities akin to ChatGPT Plus.
Link: https://youtu.be/htla3FzJTfg?si=_Nx5sIWftR4PPjbT
9. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
Using strong LLMs as judges to evaluate chat models on open-ended questions.
Evaluating large language model (LLM) based chat assistants is challenging due to their broad capabilities and the inadequacy of existing benchmarks in measuring human preferences. To address this, this paper explores using strong LLMs as judges to evaluate these models on more open-ended questions.
Link: https://arxiv.org/abs/2306.05685
10. LLM Agents: When Large Language Models Do Stuff For You
We now have an idea of what LLM agents are, but how exactly do we go from LLM to an LLM agent? To do this, LLMs need two key tweaks.
First, LLM agents need a form of memory that extends their limited context window to “reflect” on past actions to guide future efforts. Next, the LLM needs to be able to do more than yammer on all day.
Link: https://deepgram.com/learn/llm-agents-when-language-models-do-stuff-for-you
11. The Growth Behind LLM-based Autonomous Agents
In the space of 2 years, LLMs have achieved notable successes, showing the wider public that AI applications have the potential to attain human-like intelligence. Comprehensive training datasets and a substantial number of model parameters work hand in hand in order to attain this.
Read this report for a systematic review of the field of LLM-based autonomous agents from a holistic perspective.
Link: https://www.kdnuggets.com/the-growth-behind-llmbased-autonomous-agents
12. AI Agents: Limits & Solutions
The world is buzzing with excitement about autonomous agents and all the fantastic things they can accomplish.
But let's get real - they do have their limitations. What's on the "cannot do" list? How do we tackle these challenges?
In a captivating talk by Silen Naihin, the mastermind behind AutoGPT, we dive deep into these limitations and the strategies to conquer them. And guess what? Agentville is already in action, implementing some of these cutting-edge techniques!
Link: https://www.youtube.com/watch?v=3uAC0CYuDHg&list=PLmqn83GIhSInDdRKef6STtF9nb2H9eiY6&index=79&t=55s
13. Multi-Agent system that combines LLM with DevOps
Meet DevOpsGPT: A Multi-Agent System that Combines LLM with DevOps Tools
DevOpsGPT can transform requirements expressed in natural language into functional software using this novel approach, boosting efficiency, decreasing cycle time, and reducing communication expenses.
Link: https://www.marktechpost.com/2023/08/30/meet-devopsgpt-a-multi-agent-system-that-combines-llm-with-devops-tools-to-convert-natural-language-requirements-into-working-software/
14. Towards Reasoning in Large Language Models via Multi-Agent Peer Review Collaboration
Explore a novel multi-agent collaboration strategy that emulates the academic peer review process where each agent independently constructs its own solution, provides reviews on the solutions of others, and assigns confidence levels to its reviews.
Link: https://arxiv.org/pdf/2310.03903.pdf
15. Theory of Mind for Multi-Agent Collaboration via Large Language Models
This study evaluates LLM-based agents in a multi-agent cooperative text game with Theory of Mind (ToM) inference tasks, comparing their performance with Multi-Agent Reinforcement Learning (MARL) and planning-based baselines.
Link: https://arxiv.org/pdf/2310.10701.pdf
16. Multi-AI collaboration helps reasoning and factual accuracy in large language models
Researchers use multiple AI models to collaborate, debate, and improve their reasoning abilities to advance the performance of LLMs while increasing accountability and factual accuracy.
Link: https://news.mit.edu/2023/multi-ai-collaboration-helps-reasoning-factual-accuracy-language-models-0918
17. The impact of LLMs on marketplaces
LLMs and generative AI stand to be the next platform shift, enabling us to both interpret data and generate new content with unprecedented ease.
Over time, one could imagine that buyers may be able to specify their preferences in natural language with an agent that infers the parameters and their weights. This bot would then run the negotiation with the supply side (or their own bots, which would rely on their own parameters such as available supply, minimum margin, and time-to-end-of-season) and bid on their behalf.
Link: https://www.mosaicventures.com/patterns/the-impact-of-llms-on-marketplaces
18. MAgIC: Benchmarking LLM Powered Multi-Agents in Cognition, Adaptability, Rationality and Collaboration
In response to the growing use of Large Language Models in multi-agent environments, researchers at Stanford, NUS, ByteDance, and Berkeley came up with a unique benchmarking framework named MAgIC. Tailored for assessing LLMs, it offers quantitative metrics across judgment, reasoning, collaboration, and more using diverse scenarios and games.
Link: https://arxiv.org/pdf/2311.08562.pdf
19. OpenAI launches customizable ChatGPT versions (GPTs) with a future GPT Store for sharing and categorization.
OpenAI has introduced a new feature called GPTs, enabling users to create and customize their own versions of ChatGPT for specific tasks or purposes. GPTs provide a versatile solution, allowing individuals to tailor AI capabilities, such as learning board game rules, teaching math, or designing stickers, to meet their specific needs.
Link: https://openai.com/blog/introducing-gpts
20. GPTs are just the beginning. Here Come Autonomous Agents
Generative AI has reshaped business dynamics. As we face a perpetual revolution, autonomous agents—adding limbs to the powerful brains of LLMs—are set to transform workflows. Companies must strategically prepare for this automation leap by redefining their architecture and workforce readiness.
Link: https://www.bcg.com/publications/2023/gpt-was-only-the-beginning-autonomous-agents-are-coming
21. Prompt Injection: Achilles' heel of Autonomous Agents
Recent research in the world of LLMs highlights a concerning vulnerability: the potential hijacking of autonomous agents through prompt injection attacks. This article delves into the security risks unveiled, showcasing the gravity of prompt injection attacks on emerging autonomous AI agents and the implications for enterprises integrating these advanced technologies.
Link: https://venturebeat.com/security/how-prompt-injection-can-hijack-autonomous-ai-agents-like-auto-gpt/
22. AI Agents Ushering in the Automation Revolution
Artificial intelligence (AI) agents are rapidly transforming industries and empowering humans to achieve new levels of productivity and innovation. These agents can automate tasks, answer questions, and even take actions on our behalf. As AI agents become more sophisticated, they will be able to perform increasingly complex tasks and even surpass humans in some cognitive tasks. This has the potential to revolutionize the workforce, as many jobs that are currently performed by humans could be automated.
Link: https://www.forbes.com/sites/sylvainduranton/2023/12/07/ai-agents-assemble-for-the-automation-revolution/
23. AI Evolution: From Brains to Autonomous Agents
The advent of personalized AI agents represents a significant step in the field of artificial intelligence, enabling customized interactions and actions on behalf of users. These agents, empowered by deep learning and reinforcement learning, can learn and adapt to their environments, solve complex problems, and even make decisions independently. This evolution from mimicking brains to crafting autonomous agents marks a significant turning point in AI development, paving the way for a future where intelligent machines seamlessly collaborate with humans and reshape the world around us.
Link: https://www.nytimes.com/2023/11/10/technology/personalized-ai-agents.html
24. Showcasing the advancements in AI technology for various applications
A fierce competition has erupted in Silicon Valley as tech giants and startups scramble to develop the next generation of AI: autonomous agents. These intelligent assistants, powered by advanced deep learning models, promise to perform complex personal and work tasks with minimal human intervention. Fueled by billions in investment and by the potential to revolutionize various industries, the race towards these AI agents is accelerating rapidly, and a new wave of AI helpers with greater autonomy is emerging.
Link: https://www.reuters.com/technology/race-towards-autonomous-ai-agents-grips-silicon-valley-2023-07-17/
25. Microsoft AutoGen: AI becomes a Collaborative Orchestra
Microsoft AutoGen isn't building the next AI overlord. Instead, it's imagining a future where AI is a team player, a collaborative force. It is a multi-agent AI framework that uses language and automation modelling to provide an easy-to-use abstraction for developers and allows for human input and control.
Link: https://www.microsoft.com/en-us/research/project/autogen/
26. Memory: The Hidden Pathways that make us Human
Memory, the tangled web that weaves our very being, holds the key to unlocking sentience in AI. Can these hidden pathways be mapped, these synaptic whispers translated into code? By mimicking our brain's distributed storage, emotional tagging, and context-sensitive recall, AI agents can shed their robotic rigidity and work based on echoes of their own experience.
Link: https://www.youtube.com/watch?v=VzxI8Xjx1iw&t=2632s
27. DeepMind: FunSearch to unlock creativity
DeepMind's FunSearch ignites AI-powered leaps in scientific discovery. It unleashes a creative LLM to forge novel solutions, then wields a ruthless evaluator to slay false leads. This evolutionary crucible, fueled by intelligent refinement, births groundbreaking mathematical discoveries. Already conquering combinatorics, FunSearch's potential for wider scientific impact dazzles.
Link: https://deepmind.google/discover/blog/funsearch-making-new-discoveries-in-mathematical-sciences-using-large-language-models/
28. Recent Advancements in Large Language Models
Description: Recent LLMs like GPT-4 showcase impressive capabilities across various domains without extensive fine-tuning or prompting.
URL: https://aclanthology.org/2023.emnlp-main.13.pdf
29. LLMs for Multi-Agent Collaboration
Description: The emergence of LLM-based AI agents opens up new possibilities for addressing collaborative problems in multi-agent systems.
URL: https://arxiv.org/pdf/2311.13884.pdf
30. Comprehensive Survey on LLM-based Agents
Description: This paper provides a comprehensive survey on LLM-based agents, tracing the concept of agents from philosophical origins to AI development.
URL: https://arxiv.org/abs/2309.07864
31. Autonomous Chemical Research with LLMs
Description: Coscientist, an LLM-based system, demonstrates versatility and performance in various tasks, including planning chemical syntheses.
URL: https://www.nature.com/articles/s41586-023-06792-0
32. Multi-AI Collaboration for Reasoning and Accuracy
Description: Researchers use multiple AI models to collaborate, debate, and improve reasoning abilities, enhancing the performance of LLMs while increasing accountability and factual accuracy.
URL: https://news.mit.edu/2023/multi-ai-collaboration-helps-reasoning-factual-accuracy-language-models-0918
33. Multi-modal capabilities of an LLM
Traditional AI models are often limited to a single type of data, which can restrict their understanding and performance.
Multimodal models, which combine different types of data such as text, images, and audio, offer enhanced capabilities for autonomous agents and can be a game-changer for industries.
Here is an article that explains how companies can integrate Multimodal capabilities into their operations.
Link: https://www.bcg.com/publications/2023/will-multimodal-genai-be-a-gamechanger-for-the-future
34. Benchmarking the LLM performance
Stanford dropped an article three years ago that's basically a crystal ball for what we're witnessing with Large Language Models (LLMs) today. It's like they had a sneak peek into the future!
You know, these LLMs are like the brainiacs of Natural Language Understanding. I mean, probably the most advanced ones we've cooked up so far. It's wild how they've evolved, right?
The article hit the nail on the head – treating LLMs as tools. Use them right, for the right stuff, and it's like opening a treasure chest of benefits for humanity. Imagine the possibilities!
Link: https://hai.stanford.edu/news/how-large-language-models-will-transform-science-society-and-ai
35. BigBench: LLM evaluation benchmark
Well, researchers worldwide, from 132 institutions, have introduced something called the Beyond the Imitation Game benchmark, or BIG-bench.
It includes tasks that humans excel at but current models like GPT-3 struggle with. It's a way to push the boundaries and see where these models stand.
Link: https://arxiv.org/abs/2206.04615?ref=dl-staging-website.ghost.io
36. LLM as operating system
Refer to this popular video from Andrej Karpathy introducing LLMs:
https://www.youtube.com/watch?v=zjkBMFhNj_g&t=2698s
There's also this fascinating paper that envisions a whole new AIOS-Agent ecosystem
https://arxiv.org/abs/2312.03815
The paper suggests transforming the traditional OS-APP (Operating System-Application) ecosystem.
It introduces the concept of AIOS, where Large Language Models (LLMs) serve as the intelligent operating system, essentially an operating system "with soul."
37. Agentic Evaluation
Evaluating AI agents involves assessing how well they perform specific tasks.
Here is a nice video from LlamaIndex and TruEra explaining the concepts in detail: https://www.youtube.com/watch?v=0pnEUAwoDP0
Let me break it down for you.
The RAG Triad consists of three key elements: Context Relevance, Groundedness, and Answer Relevance.
Think of it like this - imagine you're asking a chatbot about restaurants.
The response it gives should make sense in the context of your question, be supported by real information (grounded), and directly address what you asked (answer relevance).
38. Real-time internet
The real-time internet concept we're pursuing is like having a digital assistant that anticipates and meets user needs instantaneously.
It's about making technology more adaptive and tailored to individual preferences.
Imagine a world where your digital tools are not just responsive but proactively helpful, simplifying your interactions with the digital realm.
Here is a nice video that explains the concept in detail:
https://www.youtube.com/watch?v=AGsafi_8iqo
39. Multi-document agents
Hey, I heard about this multi-document agent thing. What's that about, and how could it be useful?
Sure, it's a powerful setup.
Picture this: you've got individual document agents, each specializing in understanding a specific document, and then you have a coordinating agent overseeing everything.
Document agents? Coordinating agent?
Think of document agents as specialists.
They analyze and grasp the content of specific documents.
The coordinating agent manages and directs these document agents.
Can you break it down with an example?
Of course. Imagine you have manuals for various software tools.
Each document agent handles content pertaining to a single tool.
So, when you ask, "Compare the features of Tool A and Tool B," the coordinating agent knows which document agents to consult for the needed details.
Nice! How do they understand the content, though?
It's like magic, but with Large Language Models (LLMs).
Vector embeddings are used to learn the structure and meaning of the documents, helping the agents make sense of the information.
That sounds pretty clever. But what if I have a ton of documents?
Good point.
The coordinating agent is key here.
It efficiently manages which document agents to consult for a specific query, avoiding the need to sift through all documents each time.
So, it's not scanning all my documents every time I ask a question?
Exactly!
It indexes and understands the content during the setup phase.
When you pose a question, it intelligently retrieves and processes only the relevant information from the documents.
And this involves a lot of coding, I assume?
Yes, but it's not rocket science.
Tools and frameworks like LlamaIndex and Langchain make it more accessible.
You don't need to be a machine learning expert, but some coding or technical know-how helps.
Here is a tutorial from LlamaIndex around the exact same topic:
https://docs.llamaindex.ai/en/stable/examples/agent/multi_document_agents.html
40. Langgraph: Agentic framework
LangGraph is a powerful tool for creating stateful, multi-actor applications with language models. It helps you build complex systems where multiple agents can interact and make decisions based on past interactions.
Link: https://www.youtube.com/watch?v=5h-JBkySK34&list=PLfaIDFEXuae16n2TWUkKq5PgJ0w6Pkwtg
41. Autonomous Agents in GCP
In this video, we explain the use cases autonomous agents can tackle across GCP offerings:
https://drive.google.com/file/d/1KGv4JBiPip5m0CfWK1UlfSQhTLKFfxxo/view?resourcekey=0-qyuP9WDAOiH9oDxF_88u4A
42. Reflection agents
Reflection is a prompting strategy used to improve the quality and success rate of agents and similar AI systems. It involves prompting an LLM to reflect on and critique its past actions, sometimes incorporating additional external information such as tool observations.
Link: https://www.youtube.com/watch?v=v5ymBTXNqtk
43. WebVoyager
WebVoyager is a new vision-powered web-browsing agent that uses browser screenshots and “Set-of-mark” prompting to conduct research, analyze images, and perform other tasks. In this video, you will learn how to build WebVoyager using LangGraph, an open-source framework for building stateful, multi-actor AI applications. Web browsing will not be the same again!
Link: https://www.youtube.com/watch?v=ylrew7qb8sQ&t=434s
44. Future of Generative AI Agents
Delve into an illuminating conversation with Joon Sung Park on the future of generative AI agents. As one of the authors behind the groundbreaking 'Generative Agents' paper, his insights shed light on their transformative potential and the hurdles they confront. The town of Smallville, detailed within the paper, served as a catalyst for the inception of Agentville.
Link: https://www.youtube.com/watch?v=v5ymBTXNqtk
45. Building a self-corrective coding assistant
Most of you must have heard the news about Devin AI, a SWE agent that can build and deploy an app from scratch. What if you could build something similar? Here is a nice tutorial on how you can leverage LangGraph to build a self-corrective coding agent.
Link: https://www.youtube.com/watch?v=MvNdgmM7uyc&t=869s
46. Agentic workflows and pipelines
In a recent newsletter piece, Andrew Ng emphasized the transformative potential of AI agentic workflows, highlighting their capacity to drive significant progress in AI development. Incorporating an iterative agent workflow significantly boosts GPT-3.5's accuracy from 48.1% to an impressive 95.1%, surpassing GPT-4's performance in a zero-shot setting. Drawing parallels between human iterative processes and AI workflows, Andrew underscored the importance of incorporating reflection, tool use, planning, and multi-agent collaboration in designing effective AI systems.
Link: https://www.deeplearning.ai/the-batch/issue-241/
47. Self-learning GPTs
In this tutorial, we delve into the exciting realm of Self-Learning Generative Pre-trained Transformers (GPTs) powered by LangSmith. These intelligent systems not only gather feedback but also autonomously utilize this feedback to enhance their performance continuously. This is accomplished through the generation of few-shot examples derived from the feedback, which are seamlessly integrated into the prompt, leading to iterative improvement over time.
Link: https://blog.langchain.dev/self-learning-gpts/
48. Autonomous mobile agents
In this article, we dive into the cutting-edge realm of Mobile-Agents: Autonomous Multi-modal Mobile Device Agents. Discover how these agents leverage visual perception tools and state-of-the-art machine learning techniques to revolutionize mobile device interactions and user experiences.
Link: https://arxiv.org/abs/2401.16158
49. Self reflective RAG
Building on the theme of reflection, in this video we explore how LangGraph can be effectively leveraged for "flow engineering" in self-reflective RAG pipelines. LangGraph simplifies the process of designing and optimizing these pipelines, making it more accessible for researchers and practitioners.
Link: https://www.youtube.com/watch?v=pbAd8O1Lvm4&t=545s
50. Agents at Cloud Next’24
Google Cloud Next'24 just dropped a truckload of AI Agents across our universe of solutions. Dive into this video breakdown to catch all the AI antics and innovations from the event. It's AI-mazing!
Link: https://www.youtube.com/watch?v=-fW0v2aHoeQ&t=554s
51. Three pillars of Agentic workflows
At Sequoia Capital's AI Ascent, LangChain's Harrison Chase spills the tea on the future of AI agents and their leap into the real world. Buckle up for the ride as he pinpoints the holy trinity of agent evolution: planning, user experience, and memory.
Link: https://www.youtube.com/watch?v=pBBe1pk8hf4&t=130s
52. The Agent-astic Rise of AI
A new survey paper, aptly titled "The Landscape of Emerging AI Agent Architectures for Reasoning, Planning, and Tool Calling" (try saying that three times fast), dives into the exciting world of autonomous agents.
The paper throws down the gauntlet, questioning whether a lone wolf agent or a whole pack of them is the best approach. Single agents excel at well-defined tasks, while their multi-agent counterparts thrive on collaboration and diverse perspectives. It's like the Avengers versus Iron Man – teamwork makes the dream work, but sometimes you just need a billionaire genius in a flying suit!
Link: https://arxiv.org/pdf/2404.11584
53. Tool calling for Agents
Tool calling empowers developers to create advanced applications utilizing LLMs for accessing external resources. Providers like OpenAI, Gemini, and Anthropic have led the charge, prompting the demand for a standardized tool calling interface, now unveiled by Langchain for seamless provider switching.
Link: https://www.youtube.com/watch?v=zCwuAlpQKTM&t=7s
54. Can Language Models solve Olympiad Programming?
Brace yourselves for another brain-bending adventure from the minds behind the popular ReAct paper!
Their latest masterpiece dives deep into the world of algorithmic reasoning with the USACO benchmark, featuring a whopping 307 mind-bending problems from the USA Computing Olympiad. Packed with top-notch unit tests, reference code, and expert analyses, this paper is a treasure trove for all those eager to push the limits of large language models.
Link: https://arxiv.org/abs/2404.10952
55. Vertex AI Agent Builder
At Next '24, Vertex AI unveiled Agent Builder, a low-code platform for crafting agent apps. Dive into this comprehensive guide to kickstart your journey with Agent Builder and explore the potential of agent-based applications!
Link: https://cloud.google.com/dialogflow/vertex/docs/quick/create-application
56. Will LLMs forever be trapped in chat interfaces?
Embark on a journey into AI's fresh frontiers! In this article, discover how AI devices are breaking free from chat interfaces, from funky Rabbit R1 to sleek Meta smart glasses. Are you ready for AI's evolution beyond the chatbox?
Link: https://www.oneusefulthing.org/p/freeing-the-chatbot?r=i5f7&utm_campaign=post&utm_medium=web&triedRedirect=true
57. Langsmith overview
Unlock the magic of LangSmith! Dive into a series of tutorials, crafted to guide you through every twist and turn of developing, testing, and deploying LLM applications, whether or not you use LangChain.
Link: https://www.youtube.com/playlist?list=PLfaIDFEXuae2CjNiTeqXG5r8n9rld9qQu
58. Execution runtime for Autonomous Agents (GoEX)
'GoEX' is a groundbreaking runtime for autonomous LLM applications that breaks free from traditional code generation boundaries. Authored by Shishir G. Patil, Tianjun Zhang, Vivian Fang, and a stellar team, this paper delves into the future of LLMs actively engaging with tools and real-world applications.
The journey begins by reimagining human-LLM collaboration through post-facto validation, making code comprehension and validation more intuitive and efficient. With 'GoEX,' users can now confidently supervise LLM-generated outputs, thanks to innovative features like intuitive undo and damage confinement strategies.
Link: https://arxiv.org/abs/2404.06921
59. Compare and Contrast popular Agent Architectures (Reflexion, LATs, P&E, ReWOO, LLMCompiler)
In this video, you will explore six crucial concepts and five popular papers that unveil innovative ways to set up language model-based agents. From Reflexion to execution, this tutorial has you covered with direct testing examples and valuable insights.
Link: https://www.youtube.com/watch?v=ZJlfF1ESXVw&list=PLmqn83GIhSInDdRKef6STtF9nb2H9eiY6&index=9
60. Agents at Google I/O event
Dive into the buzz surrounding Google I/O, where groundbreaking announcements like Project Astra and AI teammates showcase the rapid evolution of LLM agents. Discover the limitless potential of agentic workflows in this exciting showcase of innovation and discovery!
Link: https://sites.google.com/corp/google.com/io-2024-for-googlers/internal-coverage?authuser=0&utm_source=Moma+Now&utm_campaign=IO2024&utm_medium=googlernews
61. LLM’s spatial intelligence journey
Did you catch the awe-inspiring Project Astra demo? If yes, you've definitely wondered what powered the assistant's responses. Dive into the quest for spatial intelligence in LLM vision and discover why it's the next frontier in AI. In this video, AI luminary Fei-Fei Li reveals the secrets behind spatial intelligence and its potential to revolutionize AI-human interactions.
Link: https://www.youtube.com/watch?v=y8NtMZ7VGmU
62. Anthropic unlocks the mystery of LLMs
In a groundbreaking study, Anthropic has delved deep into the intricate mechanisms of Large Language Models (LLMs), specifically focusing on Claude 3. Their pioneering research not only uncovers hidden patterns within these AI models but also provides crucial insights into addressing bias, safety, and autonomy concerns.
Link: https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html
63. Multi AI Agent systems with Crew AI
Ever wanted to build a multi-agent system? Here is an exclusive short course around this topic led by Joao Moura, the visionary creator behind the groundbreaking Crew AI framework. Discover the secrets to building robust multi-agent systems capable of tackling complex tasks with unparalleled efficiency.
What is so special about the Crew AI framework? Over 1,400,000 multi-agent crews have been powered by this cutting-edge framework in the past 7 days alone!
Link: https://www.deeplearning.ai/short-courses/multi-ai-agent-systems-with-crewai/
64. Self-correcting coding assistant from Mistral
Mistral just announced Codestral - a self-correcting code assistant well-versed in 80+ programming languages. Here is a detailed video tutorial from the LangChain team on using the model.
Save time, reduce errors, and level up your coding game with Codestral!
Link: https://mistral.ai/news/codestral/
65. Multi AI agentic systems with AutoGen
Last week, you learned to build multi-agent systems using Crew AI. This week, you get to explore AutoGen, probably the first multi-agent framework to hit the market.
Implement agentic design patterns: Reflection, Tool use, Planning, and Multi-agent collaboration using AutoGen. You also get to learn directly from the creators of AutoGen, Chi Wang and Qingyun Wu.
Link: https://www.deeplearning.ai/short-courses/ai-agentic-design-patterns-with-autogen/
66. Lessons from a year of LLM adventures
The past year has seen LLMs reach new heights, becoming integral to real-world applications and attracting substantial investment. Despite the ease of entry, building effective AI products remains a challenging journey.
Here’s a glimpse of what Gen AI product builders have learned!
Link: https://applied-llms.org/
67. Build agentic systems with LangGraph
Last week it was AutoGen, and the week before it was Crew AI. This week, you get to explore LangGraph, a framework by LangChain that lets you build agentic systems.
Discover LangGraph’s components for developing, debugging, and maintaining AI agents, and enhance agent performance with integrated search capabilities. Learn from LangChain founder Harrison Chase and Tavily founder Rotem Weiss.
Link: https://www.deeplearning.ai/short-courses/ai-agents-in-langgraph/
'''
os.environ["LANGCHAIN_API_KEY"] = "ls__92a67c6930624f93aa427f1c1ad3f59b"
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = "agenta"
generation_config = {
    "temperature": 0.2,
    "top_p": 0.95,
    "top_k": 0,
    "max_output_tokens": 8192,
}
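# Sampling configuration: a low temperature (0.2) keeps answers close to the
# newsletter material, top_p=0.95 applies nucleus sampling, top_k=0 requests no
# explicit top-k cutoff, and max_output_tokens caps responses at 8192 tokens.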
safety_settings = [
    {
        "category": "HARM_CATEGORY_HARASSMENT",
        "threshold": "BLOCK_MEDIUM_AND_ABOVE",
    },
    {
        "category": "HARM_CATEGORY_HATE_SPEECH",
        "threshold": "BLOCK_MEDIUM_AND_ABOVE",
    },
    {
        "category": "HARM_CATEGORY_SEXUALLY_EXPLICIT",
        "threshold": "BLOCK_MEDIUM_AND_ABOVE",
    },
    {
        "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
        "threshold": "BLOCK_MEDIUM_AND_ABOVE",
    },
]
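# Every Gemini harm category is set to block content rated medium or above.
# The string form is used here; the google.generativeai types module also
# exposes HarmCategory / HarmBlockThreshold enums for the same settings.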
system_instruction = context
model = genai.GenerativeModel(
    model_name="gemini-1.5-pro-latest",
    generation_config=generation_config,
    system_instruction=system_instruction,
    safety_settings=safety_settings,
)
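# The full Agentville context above is attached once as the model's system
# instruction, so each generate_content call answers against the newsletter
# material without re-sending it in every prompt.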
def model_response(text):
    #model = genai.GenerativeModel('gemini-pro')
    response = model.generate_content(text)
    return response.text
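# generate_content accepts either a plain string or a list of conversation
# turns in Gemini's {"role": ..., "parts": [...]} format, which is what
# respond() below builds from the chat history. A quick sanity check with a
# hypothetical prompt would look like:
#
#   print(model_response("Summarize the entry on reflection agents."))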
def respond(
    message,
    history: list[tuple[str, str]],
):
    # Convert the Gradio chat history into Gemini's {"role", "parts"} format.
    # The Agentville context is already supplied via `system_instruction`,
    # so only the conversation turns are sent here.
    contents = []
    for user_msg, assistant_msg in history:
        if user_msg:
            contents.append({"role": "user", "parts": [user_msg]})
        if assistant_msg:
            contents.append({"role": "model", "parts": [assistant_msg]})
    contents.append({"role": "user", "parts": [message]})
    response = model_response(contents)
    return response
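# gr.ChatInterface calls respond(message, history), where history is a list of
# (user, assistant) string pairs from earlier turns in the conversation.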
"""
For information on how to customize the ChatInterface, peruse the gradio docs: https://www.gradio.app/docs/chatinterface
"""
demo = gr.ChatInterface(
    respond,
    title="Agenta - Assistant to master the Agentville content",
    description="Helps you decipher the knowledge shared across 40+ Agentville newsletters",
    theme=gr.themes.Soft(),
)
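# gr.ChatInterface also accepts optional extras such as suggested starter
# questions; an illustrative (hypothetical) variant:
#
#   demo = gr.ChatInterface(respond, examples=["What is the RAG Triad?"])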
if __name__ == "__main__":
    demo.launch()