---
title: Nimble
---

# NimbleSearchRetriever

`NimbleSearchRetriever` enables developers to build RAG applications and AI agents that can search, access, and retrieve online information from anywhere on the web.

`NimbleSearchRetriever` harnesses Nimble's Data APIs to execute search queries and retrieve web data in an efficient, scalable fashion.
It has two modes:

* **Search & Retrieve**: Execute a search query, get the top result URLs, and retrieve the text from those URLs.
* **Retrieve**: Provide a list of URLs, and retrieve the text/data from those URLs.

If you'd like to learn more about the underlying Nimble APIs, visit the [documentation here](https://docs.nimbleway.com/nimble-sdk/web-api/web-api-overview).

## Setup

To begin using `NimbleSearchRetriever`, you'll first need to open an account with Nimble and subscribe to a plan. Nimble offers free trials, [which you can register for here](https://app.nimbleway.com/signup?returnTo=/pipelines/nimbleapi).

For more information about available plans, see our [Pricing page.](https://www.nimbleway.com/pricing)

Once you have registered, open the "Nimble API" pipeline on the ["Pipelines" page](https://app.nimbleway.com/pipelines). It provides the API credentials (a base64 token) that you will use to authenticate your retriever.


Now, set your credential string as an environment variable so that `NimbleSearchRetriever` picks it up automatically, instead of requiring you to pass it inline on every call:

```python
import getpass
import os

os.environ["NIMBLE_API_KEY"] = getpass.getpass()
```
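If you want to fail fast when the credential is missing (rather than discovering it at request time), a minimal sketch using only the standard library; the helper name `require_nimble_key` is our own, not part of the package:

```python
import os


def require_nimble_key() -> str:
    # Hypothetical helper: read the credential and raise a clear error if unset
    api_key = os.environ.get("NIMBLE_API_KEY")
    if not api_key:
        raise RuntimeError("NIMBLE_API_KEY is not set; see the Setup section above")
    return api_key
```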

For more information about the Authentication process, see [Nimble APIs Authentication Documentation](https://docs.nimbleway.com/nimble-sdk/web-api/nimble-web-api-quick-start-guide/nimble-apis-authentication).

If you want to get automated tracing for individual queries, you can set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:

```python
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
```

### Installation

The retriever is part of the `langchain-nimble` package, which is available on PyPI.

```python
%pip install -U langchain-nimble langchain-openai
```

## Instantiation

Now we can instantiate our retriever:

```python
from langchain_nimble import NimbleSearchRetriever

retriever = NimbleSearchRetriever(k=3)
```

## Usage

`NimbleSearchRetriever` accepts these arguments:

* `k` (optional) integer - Number of results to return (at most 20)
* `api_key` (optional) string - Nimble's API key; can be passed directly when instantiating the retriever or set via the `NIMBLE_API_KEY` environment variable
* `search_engine` (optional) string - The search engine your query will be executed through. Choose from:
  * `google_search` (default value) - Google's search engine
  * `bing_search` - Bing's search engine
  * `yandex_search` - Yandex's search engine
* `render` (optional) boolean - Enables or disables JavaScript rendering on the target page (enabling it may slow down responses)
* `locale` (optional) string - LCID-standard locale used for the URL request. Alternatively, use `auto` for automatic locale selection based on country targeting.
* `country` (optional) string - Country used to access the target URL; use ISO alpha-2 country codes, e.g. US, DE, GB
* `parsing_type` (optional) string - The text structure of the returned `page_content`
  * `plain_text` (default value) - Extracts just the text from the HTML
  * `markdown` - Markdown format
  * `simplified_html` - Compressed version of the original HTML document (~8% of the original HTML size)
* `links` (optional) array of strings - URLs of the websites to scrape. If provided, the retriever returns the content of these pages directly **(THIS WILL ACTIVATE THE SECOND MODE)**

You can read more about each argument in [Nimble's docs](https://docs.nimbleway.com/nimble-sdk/web-api/vertical-endpoints/serp-api/real-time-search-request#request-options).
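Putting several of these options together, a hypothetical configuration might look like the following (the values are illustrative examples, not recommendations):

```python
# Illustrative keyword arguments for NimbleSearchRetriever; values are examples only
search_kwargs = {
    "k": 5,                      # at most 20 results
    "search_engine": "bing_search",
    "render": True,              # JavaScript rendering; slower but more complete
    "locale": "auto",            # derive the locale from the country setting
    "country": "DE",             # ISO alpha-2 country code
    "parsing_type": "markdown",  # page_content formatted as Markdown
}
# retriever = NimbleSearchRetriever(**search_kwargs)
```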

### Example of search & retrieve mode with a search query string

**Fetching a single document yields the following:**

```python
import json

query = "Latest trends in artificial intelligence"
example_doc = retriever.invoke(query)[0]
print("Page Content: \n", json.dumps(example_doc.page_content, indent=2))
print("Metadata: \n", json.dumps(example_doc.metadata, indent=2))
```

```output
Page Content:
 "8 AI and machine learning trends to watch in 2025 | TechTarget\nSearch Enterprise AI\nSearch the TechTarget Network\nLogin\nRegister\nExplore the Network\nTechTarget Network\nBusiness Analytics\nCIO\nData Management\nERP\nSearch Enterprise AI\nAI Business Strategies\nAI Careers\nAI Infrastructure\nAI Platforms\nAI Technologies\nMore Topics\nApplications of AI\nML Platforms\nOther Content\nNews\nFeatures\nTips\nWebinars\n2024 IT Salary Survey Results\nSponsored Sites\nMore\nAnswers\nConference Guides\nDefinitions\nOpinions\nPodcasts\nQuizzes\nTech Accelerators\nTutorials\nVideos\nFollow:\nHome\nAI business strategies\nTech Accelerator\nWhat is enterprise AI? A complete guide for businesses\nPrev\nNext\n8 jobs that AI can't replace and why\n10 top artificial intelligence certifications and courses for 2025\nDownload this guide1\nX\nFree Download\nA guide to artificial intelligence in the enterprise\nThis wide-ranging guide to artificial intelligence in the enterprise provides the building blocks for becoming successful business consumers of AI technologies. It starts with introductory explanations of AI's history, how AI works and the main types of AI. The importance and impact of AI is covered next, followed by information on AI's key benefits and risks, current and potential AI use cases, building a successful AI strategy, steps for implementing AI tools in the enterprise and technological breakthroughs that are driving the field forward. Throughout the guide, we include hyperlinks to TechTarget articles that provide more detail and insights on the topics discussed.\nFeature\n8 AI and machine learning trends to watch in 2025\nAI agents, multimodal models, an emphasis on real-world results -- learn about the top AI and machine learning trends and what they mean for businesses in 2025.\nShare this item with your network:\nBy\nLev Craig,\nSite Editor\nPublished: 03 Jan 2025\nGenerative AI is at a crossroads. 
It's now more than two years since ChatGPT's launch, and the initial optimism about AI's potential is decidedly tempered by an awareness of its limitations and costs.\nThe 2025 AI landscape reflects that complexity. While excitement still abounds -- particularly for emerging areas, like agentic AI and multimodal models -- it's also poised to be a year of growing pains.\nCompanies are increasingly looking for proven results from generative AI, rather than early-stage prototypes. That's no easy feat for a technology that's often expensive, error-prone and vulnerable to misuse. And regulators will need to balance innovation and safety, while keeping up with a fast-moving tech environment.\nHere are eight of the top AI trends to prepare for in 2025.\n1. Hype gives way to more pragmatic approaches\nSince 2022, there's been an explosion of interest and innovation in generative AI, but actual adoption remains inconsistent. Companies often struggle to move generative AI projects, whether internal productivity tools or customer-facing applications, from pilot to production.\nThis article is part of\nWhat is enterprise AI? A complete guide for businesses\nWhich also includes:\nHow can AI drive revenue? Here are 10 approaches\n8 jobs that AI can't replace and why\n8 AI and machine learning trends to watch in 2025\nAlthough many businesses have explored generative AI through proofs of concept, fewer have fully integrated it into their operations. In a September 2024 research report, Informa TechTarget's Enterprise Strategy Group found that, although over 90% of organizations had increased their generative AI use over the previous year, only 8% considered their initiatives mature.\n\"The most surprising thing for me [in 2024] is actually the lack of adoption that we're seeing,\" said Jen Stave, launch director for the Digital Data Design Institute at Harvard University. \"When you look across businesses, companies are investing in AI. They're building their own custom tools. 
They're buying off-the-shelf enterprise versions of the large language models (LLMs). But there really hasn't been this groundswell of adoption within companies.\"\nOne reason for this is AI's uneven impact across roles and job functions. Organizations are discovering what Stave termed the \"jagged technological frontier,\" where AI enhances productivity for some tasks or employees, while diminishing it for others. A junior analyst, for example, might significantly increase their output by using a tool that only bogs down a more experienced counterpart.\n\"Managers don't know where that line is, and employees don't know where that line is,\" Stave said. \"So, there's a lot of uncertainty and experimentation.\"\nDespite the sky-high levels of generative AI hype, the reality of slow adoption is hardly a surprise to anyone with experience in enterprise tech. In 2025, expect businesses to push harder for measurable outcomes from generative AI: reduced costs, demonstrable ROI and efficiency gains.\n2. Generative AI moves beyond chatbots\nWhen most laypeople hear the term generative AI, they think of tools like ChatGPT and Claude powered by LLMs. Early explorations from businesses, too, have tended to involve incorporating LLMs into products and services via chat interfaces. But, as the technology matures, AI developers, end users and business customers alike are looking beyond chatbots.\n\"People need to think more creatively about how to use these base tools and not just try to plop a chat window into everything,\" said Eric Sydell, founder and CEO of Vero AI, an AI and analytics platform.\nThis transition aligns with a broader trend: building software atop LLMs rather than deploying chatbots as standalone tools. Moving from chatbot interfaces to applications that use LLMs on the back end to summarize or parse unstructured data can help mitigate some of the issues that make generative AI difficult to scale.\n\"[A chatbot] can help an individual be more effective ... 
but it's very one on one,\" Sydell said. \"So, how do you scale that in an enterprise-grade way?\"\nHeading into 2025, some areas of AI development are starting to move away from text-based interfaces entirely. Increasingly, the future of AI looks to center around multimodal models, like OpenAI's text-to-video Sora and ElevenLabs' AI voice generator, which can handle nontext data types, such as audio, video and images.\n\"AI has become synonymous with large language models, but that's just one type of AI,\" Stave said. \"It's this multimodal approach to AI [where] we're going to start seeing some major technological advancements.\"\nRobotics is another avenue for developing AI that goes beyond textual conversations -- in this case, to interact with the physical world. Stave anticipates that foundation models for robotics could be even more transformative than the arrival of generative AI.\n\"Think about all of the different ways we interact with the physical world,\" she said. \"I mean, the applications are just infinite.\"\n3. AI agents are the next frontier\nThe second half of 2024 has seen growing interest in agentic AI models capable of independent action. Tools like Salesforce's Agentforce are designed to autonomously handle tasks for business users, managing workflows and taking care of routine actions, like scheduling and data analysis.\nAgentic AI is in its early stages. Human direction and oversight remain critical, and the scope of actions that can be taken is usually narrowly defined. But, even with those limitations, AI agents are attractive for a wide range of sectors.\nAutonomous functionality isn't totally new, of course; by now, it's a well-established cornerstone of enterprise software. The difference with AI agents lies in their adaptability: Unlike simple automation software, agents can adapt to new information in real time, respond to unexpected obstacles and make independent decisions.\nYet, that same independence also entails new risks. 
Grace Yee, senior director of ethical innovation at Adobe, warned of \"the harm that can come ... as agents can start, in some cases, acting upon your behalf to help with scheduling or do other tasks.\" Generative AI tools are notoriously prone to hallucinations, or generating false information -- what happens if an autonomous agent makes similar mistakes with immediate, real-world consequences?\nSydell cited similar concerns, noting that some use cases will raise more ethical issues than others. \"When you start to get into high-risk applications -- things that have the potential to harm or help individuals -- the standards have to be way higher,\" he said.\nCompared with generative AI, agentic AI offers greater autonomy and adaptability.\n4. Generative AI models become commodities\nThe generative AI landscape is evolving rapidly, with foundation models seemingly now a dime a dozen. As 2025 begins, the competitive edge is moving away from which company has the best model to which businesses excel at fine-tuning pretrained models or developing specialized tools to layer on top of them.\nIn a recent newsletter, analyst Benedict Evans compared the boom in generative AI models to the PC industry of the late 1980s and 1990s. In that era, performance comparisons focused on incremental improvements in specs like CPU speed or memory, similar to how today's generative AI models are evaluated on niche technical benchmarks.\nOver time, however, these distinctions faded as the market reached a good-enough baseline, with differentiation shifting to factors such as cost, UX and ease of integration. Foundation models seem to be on a similar trajectory: As performance converges, advanced models are becoming more or less interchangeable for many use cases.\nIn a commoditized model landscape, the focus is no longer number of parameters or slightly better performance on a certain benchmark, but instead usability, trust and interoperability with legacy systems. 
In that environment, AI companies with established ecosystems, user-friendly tools and competitive pricing are likely to take the lead.\n5. AI applications and data sets become more domain-specific\nLeading AI labs, like OpenAI and Anthropic, claim to be pursuing the ambitious goal of creating artificial general intelligence (AGI), commonly defined as AI that can perform any task a human can. But AGI -- or even the comparatively limited capabilities of today's foundation models -- is far from necessary for most business applications.\nFor enterprises, interest in narrow, highly customized models started almost as soon as the generative AI hype cycle began. A narrowly tailored business application simply doesn't require the degree of versatility necessary for a consumer-facing chatbot.\n\"There's a lot of focus on the general-purpose AI models,\" Yee said. \"But I think what is more important is really thinking through: How are we using that technology ... and is that use case a high-risk use case?\"\nIn short, businesses should consider more than what technology is being deployed and instead think more deeply about who will ultimately be using it and how. \"Who's the audience?\" Yee said. \"What's the intended use case? What's the domain it's being used in?\"\nAlthough, historically, larger data sets have driven model performance improvements, researchers and practitioners are debating whether this trend can hold. Some have suggested that, for certain tasks and populations, model performance plateaus -- or even worsens -- as algorithms are fed more data.\n\"The motivation for scraping ever-larger data sets may be based on fundamentally flawed assumptions about model performance,\" authors Fernando Diaz and Michael Madaio wrote in their paper \"Scaling Laws Do Not Scale.\" \"That is, models may not, in fact, continue to improve as the data sets get larger -- at least not for all people or communities impacted by those models.\"\n6. 
AI literacy becomes essential\nGenerative AI's ubiquity has made AI literacy an in-demand skill for everyone from executives to developers to everyday employees. That means knowing how to use these tools, assess their outputs and -- perhaps most importantly -- navigate their limitations.\nNotably, although AI and machine learning talent remains in demand, developing AI literacy doesn't need to mean learning to code or train models. \"You don't necessarily have to be an AI engineer to understand these tools and how to use them and whether to use them,\" Sydell said. \"Experimenting, exploring, using the tools is massively helpful.\"\nAmid the persistent generative AI hype, it can be easy to forget that the technology is still relatively new. Many people either haven't used it at all or don't use it regularly: A recent research paper found that, as of August 2024, less than half of Americans aged 18 to 64 use generative AI, and just over a quarter use it at work.\nThat's a faster pace of adoption compared with the PC or the internet, as the paper's authors pointed out, but it's still not a majority. There's also a gap between businesses' official stances on generative AI and how real workers are using it in their day-to-day tasks.\n\"If you look at how many companies say they're using it, it's actually a pretty low share who are formally incorporating it into their operations,\" David Deming, professor at Harvard University and one of the paper's authors, told The Harvard Gazette. \"People are using it informally for a lot of different purposes, to help write emails, using it to look up things, using it to obtain documentation on how to do something.\"\nStave sees a role for both companies and educational institutions in closing the AI skills gap. \"When you look at companies, they understand the on-the-job training that workers need,\" she said. 
\"They always have because that's where the work takes place.\"\nUniversities, in contrast, are increasingly offering skill-based, rather than role-based, education that's available on an ongoing basis and applicable across multiple jobs. \"The business landscape is changing so fast. You can't just quit and go back and get a master's and learn everything new,\" Stave said. \"We have to figure out how to modularize the learning and get it out to people in real time.\"\n7. Businesses adjust to an evolving regulatory environment\nAs 2024 progressed, companies were faced with a fragmented and rapidly changing regulatory landscape. Whereas the EU set new compliance standards with the passage of the AI Act in 2024, the U.S. remains comparatively unregulated -- a trend likely to continue in 2025 under the Trump administration.\n\"One thing that I think is pretty inadequate right now is legislation [and] regulation around these tools,\" Sydell said. \"It seems like that's not going to happen anytime soon at this point.\" Stave likewise said she's \"not expecting significant regulation from the new administration.\"\nThat light-touch approach could promote AI development and innovation, but the lack of accountability also raises concerns about safety and fairness. Yee sees a need for regulation that protects the integrity of online speech, such as giving users access to provenance information about internet content, as well as anti-impersonation laws to protect creators.\nTo minimize harm without stifling innovation, Yee said she'd like to see regulation that can be responsive to the risk level of a specific AI application. Under a tiered risk framework, she said, \"low-risk AI applications can go to market faster, [while] high-risk AI applications go through a more diligent process.\"\nStave also pointed out that minimal oversight in the U.S. doesn't necessarily mean that companies will operate in a fully unregulated environment. 
In the absence of a cohesive global standard, large incumbents operating in multiple regions typically end up adhering to the most stringent regulations by default. In this way, the EU's AI Act could end up functioning similarly to GDPR, setting de facto standards for companies building or deploying AI worldwide.\n8. AI-related security concerns escalate\nThe widespread availability of generative AI, often at low or no cost, gives threat actors unprecedented access to tools for facilitating cyberattacks. That risk is poised to increase in 2025 as multimodal models become more sophisticated and readily accessible.\nIn a recent public warning, the FBI described several ways cybercriminals are using generative AI for phishing scams and financial fraud. For example, an attacker targeting victims via a deceptive social media profile might write convincing bio text and direct messages with an LLM, while using AI-generated fake photos to lend credibility to the false identity.\nAI video and audio pose a growing threat, too. Historically, models have been limited by telltale signs of inauthenticity, like robotic-sounding voices or lagging, glitchy video. While today's versions aren't perfect, they're significantly better, especially if an anxious or time-pressured victim isn't looking or listening too closely.\nAudio generators can enable hackers to impersonate a victim's trusted contacts, such as a spouse or colleague. Video generation has so far been less common, as it's more expensive and offers more opportunities for error. But, in a highly publicized incident earlier this year, scammers successfully impersonated a company's CFO and other staff members on a video call using deepfakes, leading a finance worker to send $25 million to fraudulent accounts.\nOther security risks are tied to vulnerabilities within models themselves, rather than social engineering. 
Adversarial machine learning and data poisoning, where inputs and training data are intentionally designed to mislead or corrupt models, can damage AI systems themselves. To account for these risks, businesses will need to treat AI security as a core part of their overall cybersecurity strategies.\nLev Craig covers AI and machine learning as site editor for TechTarget's Enterprise AI site. Craig graduated from Harvard University with a bachelor's degree in English and has previously written about enterprise IT, software development and cybersecurity.\nNext Steps\nThe year in AI: Catch up on the top AI news of 2024\nWays enterprise AI will transform IT infrastructure this year\nRelated Resources\nAI business strategies for successful transformation\n\u2013Video\nRedesigning Productivity in the Age of Cognitive Acceleration\n\u2013Replay\nDig Deeper on AI business strategies\nGoogle Gemini 2.0 explained: Everything you need to know\nBy: Sean\u00a0Kerner\nServiceNow intros AI agent studio and orchestrator\nBy: Esther\u00a0Shittu\nNvidia's new model aims to move GenAI to physical world\nBy: Esther\u00a0Shittu\nNot-so-obvious AI predictions for 2025\nSponsored News\nPower Your Generative AI Initiatives With High-Performance, Reliable, ...\n\u2013Dell Technologies and Intel\nPrivate AI Demystified\n\u2013Equinix\nSustainability, AI and Dell PowerEdge Servers\n\u2013Dell Technologies and Intel\nSee More\nRelated Content\nNvidia's new model aims to move GenAI to physical ...\n\u2013 Search Enterprise AI\nOracle boosts generative AI service and intros new ...\n\u2013 Search Enterprise AI\nNew Google Gemini AI tie-ins dig into local codebases\n\u2013 Search Software Quality\nLatest TechTarget resources\nBusiness Analytics\nCIO\nData Management\nERP\nSearch Business Analytics\nDomo platform a difference-maker for check guarantee vendor\nIngo Money succeeded with the analytics specialist's suite after years of struggling to gain insights from spreadsheets and a ...\nAgentic 
AI, data as a product among growing analytics trends\nCollibra's founder and chief data citizen reveals his predictions for 2025, and underpinning all is the need for strong ...\nTrusted data at the core of successful GenAI adoption\nA new study finds that only a third of organizations are successfully developing GenAI tools. The problem preventing success is ...\nSearch CIO\nWhy diversity in tech teams is important\nDiversity is key to driving organizational success, yet DEI initiatives have become a controversial topic. Learn how diversity in...\nBusinesses need to prepare as EU AI Act enforcement begins\nThe EU AI Act's Sunday enforcement deadline will be a test for EU enforcers as they begin assessing companies for compliance.\nU.S. freeze on foreign aid may give China a leg up\nAs the U.S. steps back on foreign aid, experts worry China may step in to fill the void.\nSearch Data Management\nTop trends in big data for 2025 and beyond\nBig data initiatives are being affected by various trends. Here are seven notable ones and what they mean for organizations ...\nPinecone provides Assistant for generative AI development\nThe vector database specialist is expanding beyond managing data with a suite of APIs and other tools that enable users to tap ...\n18 top data catalog software tools to consider using in 2025\nNumerous tools can be used to build and manage data catalogs. Here's a look at the key features, capabilities and components of ...\nSearch ERP\nDemand vs. supply planning: Learn about the differences\nDemand planning helps companies predict future demand for goods, while supply planning enables companies to order enough goods. 
...\nTop 10 essential skills for ERP professionals in 2025\nBoth hard and soft skills are essential for ERP professionals, including project management and being up to date with technology.\nAcumatica cloud ERP aims for industry-focused AI value\nNew AI functionality in the company's cloud ERP platform could help customers evolve back-office transactional systems into ...\nAbout Us\nEditorial Ethics Policy\nMeet The Editors\nContact Us\nAdvertisers\nPartner with Us\nMedia Kit\nCorporate Site\nContributors\nReprints\nAnswers\nDefinitions\nE-Products\nEvents\nFeatures\nGuides\nOpinions\nPhoto Stories\nQuizzes\nTips\nTutorials\nVideos\nAll Rights Reserved,\nCopyright 2018 - 2025, TechTarget\nPrivacy Policy\nCookie Preferences\nCookie Preferences\nDo Not Sell or Share My Personal Information\nClose"
Metadata:
 {
  "title": "8 AI and machine learning trends to watch in 2025",
  "snippet": "Jan 3, 2025 \u2014 1. Hype gives way to more pragmatic approaches \u00b7 2. Generative AI moves beyond chatbots \u00b7 3. AI agents are the next frontier \u00b7 4. Generative AI\u00a0...",
  "url": "https://www.techtarget.com/searchenterpriseai/tip/9-top-AI-and-machine-learning-trends",
  "position": 1,
  "entity_type": "OrganicResult"
}
```
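In a RAG pipeline, retrieved documents are typically concatenated into a single context string before being passed to a prompt. A minimal sketch (the helper name and separator are our own choices, not part of the package):

```python
def format_docs(docs) -> str:
    # Join each document's page_content, separated by blank lines
    return "\n\n".join(doc.page_content for doc in docs)
```

You would then call something like `context = format_docs(retriever.invoke(query))` to build the prompt context.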

**Invoking the same query without indexing into a single document returns the full list of `Document` objects:**

```python
retriever.invoke(query)
```

```output
[Document(metadata={'title': '8 AI and machine learning trends to watch in 2025', 'snippet': 'Jan 3, 2025 — 1. Hype gives way to more pragmatic approaches · 2. Generative AI moves beyond chatbots · 3. AI agents are the next frontier · 4. Generative AI\xa0...', 'url': 'https://www.techtarget.com/searchenterpriseai/tip/9-top-AI-and-machine-learning-trends', 'position': 1, 'entity_type': 'OrganicResult'}, page_content='8 AI and machine learning trends to watch in 2025 | TechTarget\nSearch Enterprise AI\nSearch the TechTarget Network\nLogin\nRegister\nExplore the Network\nTechTarget Network\nBusiness Analytics\nCIO\nData Management\nERP\nSearch Enterprise AI\nAI Business Strategies\nAI Careers\nAI Infrastructure\nAI Platforms\nAI Technologies\nMore Topics\nApplications of AI\nML Platforms\nOther Content\nNews\nFeatures\nTips\nWebinars\n2024 IT Salary Survey Results\nSponsored Sites\nMore\nAnswers\nConference Guides\nDefinitions\nOpinions\nPodcasts\nQuizzes\nTech Accelerators\nTutorials\nVideos\nFollow:\nHome\nAI business strategies\nTech Accelerator\nWhat is enterprise AI? A complete guide for businesses\nPrev\nNext\n8 jobs that AI can\'t replace and why\n10 top artificial intelligence certifications and courses for 2025\nDownload this guide1\nX\nFree Download\nA guide to artificial intelligence in the enterprise\nThis wide-ranging guide to artificial intelligence in the enterprise provides the building blocks for becoming successful business consumers of AI technologies. It starts with introductory explanations of AI\'s history, how AI works and the main types of AI. The importance and impact of AI is covered next, followed by information on AI\'s key benefits and risks, current and potential AI use cases, building a successful AI strategy, steps for implementing AI tools in the enterprise and technological breakthroughs that are driving the field forward. 
Throughout the guide, we include hyperlinks to TechTarget articles that provide more detail and insights on the topics discussed.\nFeature\n8 AI and machine learning trends to watch in 2025\nAI agents, multimodal models, an emphasis on real-world results -- learn about the top AI and machine learning trends and what they mean for businesses in 2025.\nShare this item with your network:\nBy\nLev Craig,\nSite Editor\nPublished: 03 Jan 2025\nGenerative AI is at a crossroads. It\'s now more than two years since ChatGPT\'s launch, and the initial optimism about AI\'s potential is decidedly tempered by an awareness of its limitations and costs.\nThe 2025 AI landscape reflects that complexity. While excitement still abounds -- particularly for emerging areas, like agentic AI and multimodal models -- it\'s also poised to be a year of growing pains.\nCompanies are increasingly looking for proven results from generative AI, rather than early-stage prototypes. That\'s no easy feat for a technology that\'s often expensive, error-prone and vulnerable to misuse. And regulators will need to balance innovation and safety, while keeping up with a fast-moving tech environment.\nHere are eight of the top AI trends to prepare for in 2025.\n1. Hype gives way to more pragmatic approaches\nSince 2022, there\'s been an explosion of interest and innovation in generative AI, but actual adoption remains inconsistent. Companies often struggle to move generative AI projects, whether internal productivity tools or customer-facing applications, from pilot to production.\nThis article is part of\nWhat is enterprise AI? A complete guide for businesses\nWhich also includes:\nHow can AI drive revenue? Here are 10 approaches\n8 jobs that AI can\'t replace and why\n8 AI and machine learning trends to watch in 2025\nAlthough many businesses have explored generative AI through proofs of concept, fewer have fully integrated it into their operations. 
In a September 2024 research report, Informa TechTarget\'s Enterprise Strategy Group found that, although over 90% of organizations had increased their generative AI use over the previous year, only 8% considered their initiatives mature.\n"The most surprising thing for me [in 2024] is actually the lack of adoption that we\'re seeing," said Jen Stave, launch director for the Digital Data Design Institute at Harvard University. "When you look across businesses, companies are investing in AI. They\'re building their own custom tools. They\'re buying off-the-shelf enterprise versions of the large language models (LLMs). But there really hasn\'t been this groundswell of adoption within companies."\nOne reason for this is AI\'s uneven impact across roles and job functions. Organizations are discovering what Stave termed the "jagged technological frontier," where AI enhances productivity for some tasks or employees, while diminishing it for others. A junior analyst, for example, might significantly increase their output by using a tool that only bogs down a more experienced counterpart.\n"Managers don\'t know where that line is, and employees don\'t know where that line is," Stave said. "So, there\'s a lot of uncertainty and experimentation."\nDespite the sky-high levels of generative AI hype, the reality of slow adoption is hardly a surprise to anyone with experience in enterprise tech. In 2025, expect businesses to push harder for measurable outcomes from generative AI: reduced costs, demonstrable ROI and efficiency gains.\n2. Generative AI moves beyond chatbots\nWhen most laypeople hear the term generative AI, they think of tools like ChatGPT and Claude powered by LLMs. Early explorations from businesses, too, have tended to involve incorporating LLMs into products and services via chat interfaces. 
But, as the technology matures, AI developers, end users and business customers alike are looking beyond chatbots.\n"People need to think more creatively about how to use these base tools and not just try to plop a chat window into everything," said Eric Sydell, founder and CEO of Vero AI, an AI and analytics platform.\nThis transition aligns with a broader trend: building software atop LLMs rather than deploying chatbots as standalone tools. Moving from chatbot interfaces to applications that use LLMs on the back end to summarize or parse unstructured data can help mitigate some of the issues that make generative AI difficult to scale.\n"[A chatbot] can help an individual be more effective ... but it\'s very one on one," Sydell said. "So, how do you scale that in an enterprise-grade way?"\nHeading into 2025, some areas of AI development are starting to move away from text-based interfaces entirely. Increasingly, the future of AI looks to center around multimodal models, like OpenAI\'s text-to-video Sora and ElevenLabs\' AI voice generator, which can handle nontext data types, such as audio, video and images.\n"AI has become synonymous with large language models, but that\'s just one type of AI," Stave said. "It\'s this multimodal approach to AI [where] we\'re going to start seeing some major technological advancements."\nRobotics is another avenue for developing AI that goes beyond textual conversations -- in this case, to interact with the physical world. Stave anticipates that foundation models for robotics could be even more transformative than the arrival of generative AI.\n"Think about all of the different ways we interact with the physical world," she said. "I mean, the applications are just infinite."\n3. AI agents are the next frontier\nThe second half of 2024 has seen growing interest in agentic AI models capable of independent action. 
Tools like Salesforce\'s Agentforce are designed to autonomously handle tasks for business users, managing workflows and taking care of routine actions, like scheduling and data analysis.\nAgentic AI is in its early stages. Human direction and oversight remain critical, and the scope of actions that can be taken is usually narrowly defined. But, even with those limitations, AI agents are attractive for a wide range of sectors.\nAutonomous functionality isn\'t totally new, of course; by now, it\'s a well-established cornerstone of enterprise software. The difference with AI agents lies in their adaptability: Unlike simple automation software, agents can adapt to new information in real time, respond to unexpected obstacles and make independent decisions.\nYet, that same independence also entails new risks. Grace Yee, senior director of ethical innovation at Adobe, warned of "the harm that can come ... as agents can start, in some cases, acting upon your behalf to help with scheduling or do other tasks." Generative AI tools are notoriously prone to hallucinations, or generating false information -- what happens if an autonomous agent makes similar mistakes with immediate, real-world consequences?\nSydell cited similar concerns, noting that some use cases will raise more ethical issues than others. "When you start to get into high-risk applications -- things that have the potential to harm or help individuals -- the standards have to be way higher," he said.\nCompared with generative AI, agentic AI offers greater autonomy and adaptability.\n4. Generative AI models become commodities\nThe generative AI landscape is evolving rapidly, with foundation models seemingly now a dime a dozen. 
As 2025 begins, the competitive edge is moving away from which company has the best model to which businesses excel at fine-tuning pretrained models or developing specialized tools to layer on top of them.\nIn a recent newsletter, analyst Benedict Evans compared the boom in generative AI models to the PC industry of the late 1980s and 1990s. In that era, performance comparisons focused on incremental improvements in specs like CPU speed or memory, similar to how today\'s generative AI models are evaluated on niche technical benchmarks.\nOver time, however, these distinctions faded as the market reached a good-enough baseline, with differentiation shifting to factors such as cost, UX and ease of integration. Foundation models seem to be on a similar trajectory: As performance converges, advanced models are becoming more or less interchangeable for many use cases.\nIn a commoditized model landscape, the focus is no longer number of parameters or slightly better performance on a certain benchmark, but instead usability, trust and interoperability with legacy systems. In that environment, AI companies with established ecosystems, user-friendly tools and competitive pricing are likely to take the lead.\n5. AI applications and data sets become more domain-specific\nLeading AI labs, like OpenAI and Anthropic, claim to be pursuing the ambitious goal of creating artificial general intelligence (AGI), commonly defined as AI that can perform any task a human can. But AGI -- or even the comparatively limited capabilities of today\'s foundation models -- is far from necessary for most business applications.\nFor enterprises, interest in narrow, highly customized models started almost as soon as the generative AI hype cycle began. A narrowly tailored business application simply doesn\'t require the degree of versatility necessary for a consumer-facing chatbot.\n"There\'s a lot of focus on the general-purpose AI models," Yee said. 
"But I think what is more important is really thinking through: How are we using that technology ... and is that use case a high-risk use case?"\nIn short, businesses should consider more than what technology is being deployed and instead think more deeply about who will ultimately be using it and how. "Who\'s the audience?" Yee said. "What\'s the intended use case? What\'s the domain it\'s being used in?"\nAlthough, historically, larger data sets have driven model performance improvements, researchers and practitioners are debating whether this trend can hold. Some have suggested that, for certain tasks and populations, model performance plateaus -- or even worsens -- as algorithms are fed more data.\n"The motivation for scraping ever-larger data sets may be based on fundamentally flawed assumptions about model performance," authors Fernando Diaz and Michael Madaio wrote in their paper "Scaling Laws Do Not Scale." "That is, models may not, in fact, continue to improve as the data sets get larger -- at least not for all people or communities impacted by those models."\n6. AI literacy becomes essential\nGenerative AI\'s ubiquity has made AI literacy an in-demand skill for everyone from executives to developers to everyday employees. That means knowing how to use these tools, assess their outputs and -- perhaps most importantly -- navigate their limitations.\nNotably, although AI and machine learning talent remains in demand, developing AI literacy doesn\'t need to mean learning to code or train models. "You don\'t necessarily have to be an AI engineer to understand these tools and how to use them and whether to use them," Sydell said. "Experimenting, exploring, using the tools is massively helpful."\nAmid the persistent generative AI hype, it can be easy to forget that the technology is still relatively new. 
Many people either haven\'t used it at all or don\'t use it regularly: A recent research paper found that, as of August 2024, less than half of Americans aged 18 to 64 use generative AI, and just over a quarter use it at work.\nThat\'s a faster pace of adoption compared with the PC or the internet, as the paper\'s authors pointed out, but it\'s still not a majority. There\'s also a gap between businesses\' official stances on generative AI and how real workers are using it in their day-to-day tasks.\n"If you look at how many companies say they\'re using it, it\'s actually a pretty low share who are formally incorporating it into their operations," David Deming, professor at Harvard University and one of the paper\'s authors, told The Harvard Gazette. "People are using it informally for a lot of different purposes, to help write emails, using it to look up things, using it to obtain documentation on how to do something."\nStave sees a role for both companies and educational institutions in closing the AI skills gap. "When you look at companies, they understand the on-the-job training that workers need," she said. "They always have because that\'s where the work takes place."\nUniversities, in contrast, are increasingly offering skill-based, rather than role-based, education that\'s available on an ongoing basis and applicable across multiple jobs. "The business landscape is changing so fast. You can\'t just quit and go back and get a master\'s and learn everything new," Stave said. "We have to figure out how to modularize the learning and get it out to people in real time."\n7. Businesses adjust to an evolving regulatory environment\nAs 2024 progressed, companies were faced with a fragmented and rapidly changing regulatory landscape. Whereas the EU set new compliance standards with the passage of the AI Act in 2024, the U.S. 
remains comparatively unregulated -- a trend likely to continue in 2025 under the Trump administration.\n"One thing that I think is pretty inadequate right now is legislation [and] regulation around these tools," Sydell said. "It seems like that\'s not going to happen anytime soon at this point." Stave likewise said she\'s "not expecting significant regulation from the new administration."\nThat light-touch approach could promote AI development and innovation, but the lack of accountability also raises concerns about safety and fairness. Yee sees a need for regulation that protects the integrity of online speech, such as giving users access to provenance information about internet content, as well as anti-impersonation laws to protect creators.\nTo minimize harm without stifling innovation, Yee said she\'d like to see regulation that can be responsive to the risk level of a specific AI application. Under a tiered risk framework, she said, "low-risk AI applications can go to market faster, [while] high-risk AI applications go through a more diligent process."\nStave also pointed out that minimal oversight in the U.S. doesn\'t necessarily mean that companies will operate in a fully unregulated environment. In the absence of a cohesive global standard, large incumbents operating in multiple regions typically end up adhering to the most stringent regulations by default. In this way, the EU\'s AI Act could end up functioning similarly to GDPR, setting de facto standards for companies building or deploying AI worldwide.\n8. AI-related security concerns escalate\nThe widespread availability of generative AI, often at low or no cost, gives threat actors unprecedented access to tools for facilitating cyberattacks. That risk is poised to increase in 2025 as multimodal models become more sophisticated and readily accessible.\nIn a recent public warning, the FBI described several ways cybercriminals are using generative AI for phishing scams and financial fraud. 
For example, an attacker targeting victims via a deceptive social media profile might write convincing bio text and direct messages with an LLM, while using AI-generated fake photos to lend credibility to the false identity.\nAI video and audio pose a growing threat, too. Historically, models have been limited by telltale signs of inauthenticity, like robotic-sounding voices or lagging, glitchy video. While today\'s versions aren\'t perfect, they\'re significantly better, especially if an anxious or time-pressured victim isn\'t looking or listening too closely.\nAudio generators can enable hackers to impersonate a victim\'s trusted contacts, such as a spouse or colleague. Video generation has so far been less common, as it\'s more expensive and offers more opportunities for error. But, in a highly publicized incident earlier this year, scammers successfully impersonated a company\'s CFO and other staff members on a video call using deepfakes, leading a finance worker to send $25 million to fraudulent accounts.\nOther security risks are tied to vulnerabilities within models themselves, rather than social engineering. Adversarial machine learning and data poisoning, where inputs and training data are intentionally designed to mislead or corrupt models, can damage AI systems themselves. To account for these risks, businesses will need to treat AI security as a core part of their overall cybersecurity strategies.\nLev Craig covers AI and machine learning as site editor for TechTarget\'s Enterprise AI site. 
Craig graduated from Harvard University with a bachelor\'s degree in English and has previously written about enterprise IT, software development and cybersecurity.'),
 Document(metadata={'title': '5 AI Trends to Watch in 2025', 'snippet': 'Jan 6, 2025 — 5 trends in artificial intelligence · 1. Generative AI and democratization · 2. AI for workplace productivity · 3. Multimodal AI · 4. AI in science\xa0...', 'url': 'https://www.coursera.org/articles/ai-trends', 'position': 2, 'entity_type': 'OrganicResult'}, page_content="5 AI Trends to Watch in 2025 | Coursera\nWritten by Coursera Staff • Updated on Jan 6, 2025\nGet to know some of the top trends in artificial intelligence in 2024.  Artificial intelligence (AI) has taken the technology industry by storm, and it’s only growing from here. According to PricewaterhouseCoopers, 73 percent of US companies use AI in some capacity in their business [1].One of the most recent trends is generative AI, which has the potential to generate trillions of dollars in value across industries [2]. People worldwide have begun to incorporate generative AI into their workflows, adding to the popularity and penetration of AI.Learn about some of the top trends in AI.5 trends in artificial intelligenceArtificial intelligence is quickly transforming how we live and the business landscape in which we work. Wondering what some of the potential impacts of this exciting technology might be?Here are five of the top AI trends you can expect to see in 2024.1. Generative AI and democratizationGenerative AI is arguably the biggest trend in AI this year. When ChatGPT and other text and image generators became accessible to the general public, it was widely used and adopted by business teams worldwide. Along with this is the democratization of AI, enabling it to be available to everyone—even those without technical knowledge.\xa0Generative AI is just one example of democratization. 
Hundreds of AI tools today allow us to create content faster, translate between languages, and populate search engines. It is changing how we communicate with each other, whether it’s between friends or between the media and the general public.2. AI for workplace productivityAnother trend we see in AI is its place in workplace productivity. Artificial intelligence can speed up and enhance how we work—in particular, how it automates time-consuming or repetitive tasks. Whether inputting data in a spreadsheet, writing an outline for a business plan, or controlling quality at a manufacturing plant, AI has massive potential to increase our productivity at work.For those who may be concerned about AI replacing jobs, this technology is often simply acting as a tool for automating repetition, leaving room for humans to make space for creativity, emotional intelligence, and moral judgment.3. Multimodal AIMany large language models (LLMs) process only text data. Multimodal models in AI can grasp information from different data types, like audio, video, and images, in addition to text. This technology is enabling search and content creation tools to become more seamless and intuitive and integrate more easily into other applications we already use.\xa0For example, iPhones can now figure out who and what objects are in your photographs because they can process images, metadata text, and search data. Similar to how a human can look at a photo and identify what’s in it, multimodal models enable that same characteristic.4. AI in science and health careBesides AI’s influence in the business workplace, AI tools have great potential in science and health care. Researchers, such as those at Microsoft, are now using AI to build tools to predict weather, estimate carbon emissions, and enable sustainable farming practices [3]. 
This trend aims to address and mitigate the effects of climate change.\xa0Chatbots are being deployed in agriculture and health care, to help farmers identify a type of weed and to help medical professionals diagnose patients. While the accuracy of this AI is in progress, these steps can accelerate scientific discoveries and medical breakthroughs.5. Regulation and ethicsWith the proliferation of AI worldwide, the trend of mitigating any risks associated with AI is paramount. Government agencies and organizations like OpenAI are ensuring AI is used and deployed responsibly and ethically. In March 2024, the European Union debated a landmark comprehensive AI bill designed to regulate AI and address concerns for consumers. It is expected to become law this year.If AI is not regulated, data manipulation, misinformation, bias, and privacy risks can arise and pose greater societal risks. For example, tools can be susceptible to discrimination or legal risk if AI doesn’t collect data representative of a population. Generators like ChatGPT pull information from internet searches worldwide, but companies and publications have sued OpenAI for copyright infringement.Trends in AI security\nCybersecurity is a major concern for AI, particularly as more and more business processes rely on computing resources with access to vast amounts of sensitive information. Some of the top cybersecurity artificial intelligence trends include:\nPrivacy concerns: Generative models can help organizations be more productive, but business owners should ensure platforms are secure before sharing private information or trade secrets with them.  
Data breaches: While AI-driven systems can be used to better detect data breaches, they can also facilitate them. Improved analytics: AI can help organizations improve their security with improved analytics, capable of spotting trends in vast amounts of incident report data.\nArticle sources1.\xa0PricewaterhouseCoopers. “2024 AI Business Predictions, https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html.” Accessed March 10, 2024.2.\xa0McKinsey. “The economic potential of generative AI: The next productivity frontier, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#introduction.” Accessed March 10, 2024.3.\xa0Microsoft. “3 big AI trends to watch in 2024, https://news.microsoft.com/three-big-ai-trends-to-watch-in-2024/.” Accessed March 10, 2024.\nThis content has been made available for informational purposes only. 
Learners are advised to conduct additional research to ensure that courses and other credentials pursued meet their personal, professional, and financial goals."),
 Document(metadata={'title': 'AI Index Report 2024 - Stanford University', 'snippet': 'This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology,\xa0...', 'url': 'https://aiindex.stanford.edu/report/', 'position': 3, 'entity_type': 'OrganicResult'}, page_content='AI Index Report 2024 – Artificial Intelligence Index\nThe AI Index Report: Measuring trends in AI\nWelcome to the 2024 AI Index Report\nWelcome to the seventh edition of the AI Index report. The 2024 Index is our most comprehensive to date and arrives at an important moment when AI’s influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI’s impact on science and medicine.The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI). Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI.\nTOP TAKEAWAYS\n1. AI beats humans on some tasks, but not on all.AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. 
Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.\n2. Industry continues to dominate frontier AI research.\xa0In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.\n3. Frontier models get way more expensive.According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI’s GPT-4 used an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million for compute.\n4. The United States leads China, the EU, and the U.K. as the leading source of top AI models.\xa0In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union’s 21 and China’s 15.\n5. Robust and standardized evaluations for LLM responsibility are seriously lacking.New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.\n6. Generative AI investment skyrockets.Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.\n7. The data is in: AI makes workers more productive and leads to higher quality work.In 2023, several studies assessed AI’s impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. 
These studies also demonstrated AI’s potential to bridge the skill gap between low- and high-skilled workers. Still other studies caution that using AI without proper oversight can lead to diminished performance.\n8. Scientific progress accelerates even further, thanks to AI.In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications—from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.\n9. The number of AI regulations in the United States sharply increases.The number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.\n10. People across the globe are more cognizant of AI’s potential impact—and more nervous.A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 38% in 2022.\nCHAPTERS\nChapter 1: Research and Development\nThis chapter studies trends in AI research and development. It begins by examining trends in AI publications and patents, and then examines trends in notable AI systems and foundation models. It concludes by analyzing AI conference attendance and open-source AI software projects.\n1. Industry continues to dominate frontier AI research.\n2. More foundation models and more open foundation models.\n3. Frontier models get way more expensive.\n4. The United States leads China, the EU, and the U.K. 
as the leading source of top AI models.\n5. The number of AI patents skyrockets.\n6. China dominates AI patents.\n7. Open-source AI research explodes.\n8. The number of AI publications continues to rise.\nIn 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.\nIn 2023, a total of 149 foundation models were released, more than double the amount released in 2022. Of these newly released models, 65.7% were open-source, compared to only 44.4% in 2022 and 33.3% in 2021.\nAccording to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI’s GPT-4 used an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million for compute.\nIn 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union’s 21 and China’s 15.\nFrom 2021 to 2022, AI patent grants worldwide increased sharply by 62.7%. Since 2010, the number of granted AI patents has increased more than 31 times.\nIn 2022, China led global AI patent origins with 61.1%, significantly outpacing the United States, which accounted for 20.9% of AI patent origins. Since 2010, the U.S. share of AI patents has decreased from 54.1%.\nSince 2011, the number of AI-related projects on GitHub has seen a consistent increase, growing from 845 in 2011 to approximately 1.8 million in 2023. Notably, there was a sharp 59.3% rise in the total number of GitHub AI projects in 2023 alone. The total number of stars for AI-related projects on GitHub also significantly increased in 2023, more than tripling from 4.0 million in 2022 to 12.2 million.\nBetween 2010 and 2022, the total number of AI publications nearly tripled, rising from approximately 88,000 in 2010 to more than 240,000 in 2022. 
The increase over the last year was a modest 1.1%.\nChapter 2: Technical Performance\nThe technical performance section of this year’s AI Index offers a comprehensive overview of AI advancements in 2023. It starts with a high-level overview of AI technical performance, tracing its broad evolution over time. The chapter then examines the current state of a wide range of AI capabilities, including language processing, coding, computer vision (image and video analysis), reasoning, audio processing, autonomous agents, robotics, and reinforcement learning. It also shines a spotlight on notable AI research breakthroughs from the past year, exploring methods for improving LLMs through prompting, optimization, and fine-tuning, and wraps up with an exploration of AI systems’ environmental footprint.\nDOWNLOAD CHAPTER 2\n1. AI beats humans on some tasks, but not on all.\n2. Here comes multimodal AI.\n3. Harder benchmarks emerge.\n4. Better AI means better data which means … even better AI.\n5. Human evaluation is in.\n6. Thanks to LLMs, robots have become more flexible.\n7. More technical research in agentic AI.\n8. Closed LLMs significantly outperform open ones.\nAI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.\nTraditionally AI systems have been limited in scope, with language models excelling in text comprehension but faltering in image processing, and vice versa. However, recent advancements have led to the development of strong multimodal models, such as Google’s Gemini and OpenAI’s GPT-4. 
These models demonstrate flexibility and are capable of handling images and text and, in some instances, can even process audio.\nAI models have reached performance saturation on established benchmarks such as ImageNet, SQuAD, and SuperGLUE, prompting researchers to develop more challenging ones. In 2023, several challenging new benchmarks emerged, including SWE-bench for coding, HEIM for image generation, MMMU for general reasoning, MoCa for moral reasoning, AgentBench for agent-based behavior, and HaluEval for hallucinations.\nNew AI models such as SegmentAnything and Skoltech are being used to generate specialized data for tasks like image segmentation and 3D reconstruction. Data is vital for AI technical improvements. The use of AI to create more data enhances current capabilities and paves the way for future algorithmic improvements, especially on harder tasks.\nWith generative models producing high-quality text, images, and more, benchmarking has slowly started shifting toward incorporating human evaluations like the Chatbot Arena Leaderboard rather than computerized rankings like ImageNet or SQuAD. Public feeling about AI is becoming an increasingly important consideration in tracking AI progress.\nThe fusion of language modeling with robotics has given rise to more flexible robotic systems like PaLM-E and RT-2. Beyond their improved robotic capabilities, these models can ask questions, which marks a significant step toward robots that can interact more effectively with the real world.\nCreating AI agents, systems capable of autonomous operation in specific environments, has long challenged computer scientists. However, emerging research suggests that the performance of autonomous AI agents is improving. 
Current agents can now master complex games like Minecraft and effectively tackle real-world tasks, such as online shopping and research assistance.\nOn 10 select AI benchmarks, closed models outperformed open ones, with a median performance advantage of 24.2%. Differences in the performance of closed and open models carry important implications for AI policy debates.\nChapter 3: Responsible AI\nAI is increasingly woven into nearly every facet of our lives. This integration is occurring in sectors such as education, finance, and healthcare, where critical decisions are often based on algorithmic insights. This trend promises to bring many advantages; however, it also introduces potential risks. Consequently, in the past year, there has been a significant focus on the responsible development and deployment of AI systems. The AI community has also become more concerned with assessing the impact of AI systems and mitigating risks for those affected.This chapter explores key trends in responsible AI by examining metrics, research, and benchmarks in four key responsible AI areas: privacy and data governance, transparency and explainability, security and safety, and fairness. Given that 4 billion people are expected to vote globally in 2024, this chapter also features a special section on AI and elections and more broadly explores the potential impact of AI on political processes.\nDOWNLOAD CHAPTER 3\n1. Robust and standardized evaluations for LLM responsibility are seriously lacking.\n2. Political deepfakes are easy to generate and difficult to detect.\n3. Researchers discover more complex vulnerabilities in LLMs.\n4. Risks from AI are a concern for businesses across the globe.\n5.  LLMs can output copyrighted material.\n6. AI developers score low on transparency, with consequences for research.\n7. Extreme AI risks are difficult to analyze.\n8. The number of AI incidents continues to rise.\n9. 
ChatGPT is politically biased.\nNew research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.\nPolitical deepfakes are already affecting elections across the world, with recent research suggesting that existing AI deepfake detection methods perform with varying levels of accuracy. In addition, new projects like CounterCloud demonstrate how easily AI can create and disseminate fake content.\nPreviously, most efforts to red team AI models focused on testing adversarial prompts that intuitively made sense to humans. This year, researchers found less obvious strategies to get LLMs to exhibit harmful behavior, like asking the models to infinitely repeat random words.\nA global survey on responsible AI highlights that companies’ top AI-related concerns include privacy, security, and reliability. The survey shows that organizations are beginning to take steps to mitigate these risks. However, globally, most companies have so far only mitigated a portion of these risks.\nMultiple researchers have shown that the generative outputs of popular LLMs may contain copyrighted material, such as excerpts from The New York Times or scenes from movies. Whether such output constitutes copyright violations is becoming a central legal question.\nThe newly introduced Foundation Model Transparency Index shows that AI developers lack transparency, especially regarding the disclosure of training data and methodologies. 
This lack of openness hinders efforts to further understand the robustness and safety of AI systems.\nOver the past year, a substantial debate has emerged among AI scholars and practitioners regarding the focus on immediate model risks, like algorithmic discrimination, versus potential long-term existential threats. It has become challenging to distinguish which claims are scientifically founded and should inform policymaking. This difficulty is compounded by the tangible nature of already present short-term risks in contrast with the theoretical nature of existential threats.\nAccording to the AI Incident Database, which tracks incidents related to the misuse of AI, 123 incidents were reported in 2023, a 32.3% increase from 2022. Since 2013, AI incidents have grown by over twentyfold. A notable example includes AI-generated, sexually explicit deepfakes of Taylor Swift that were widely shared online.\nResearchers find a significant bias in ChatGPT toward Democrats in the United States and the Labour Party in the U.K. This finding raises concerns about the tool’s potential to influence users’ political views, particularly in a year marked by major global elections.\nChapter 4: Economy\nThe integration of AI into the economy raises many compelling questions. Some predict that AI will drive productivity improvements, but the extent of its impact remains uncertain. A major concern is the potential for massive labor displacement—to what degree will jobs be automated versus augmented by AI? Companies are already utilizing AI in various ways across industries, but some regions of the world are witnessing greater investment inflows into this transformative technology. Moreover, investor interest appears to be gravitating toward specific AI subfields like natural language processing and data management.This chapter examines AI-related economic trends using data from Lightcast, LinkedIn, Quid, McKinsey, Stack Overflow, and the International Federation of Robotics (IFR). 
It begins by analyzing AI-related occupations, covering labor demand, hiring trends, skill penetration, and talent availability. The chapter then explores corporate investment in AI, introducing a new section focused specifically on generative AI. It further examines corporate adoption of AI, assessing current usage and how developers adopt these technologies. Finally, it assesses AI’s current and projected economic impact and robot installations across various sectors.\nDOWNLOAD CHAPTER 4\n1. Generative AI investment skyrockets.\n2. Already a leader, the United States pulls even further ahead in AI private investment.\n3. Fewer AI jobs, in the United States and across the globe.\n4. AI decreases costs and increases revenues.\n5. Total AI private investment declines again, while the number of newly funded AI companies increases.\n6. AI organizational adoption ticks up.\n7. China dominates industrial robotics.\n8. Greater diversity in robot installations.\n9. The data is in: AI makes workers more productive and leads to higher quality work.\n10. Fortune 500 companies start talking a lot about AI, especially generative AI.\nDespite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.\nIn 2023, the United States saw AI investments reach $67.2 billion, nearly 8.7 times more than China, the next highest investor. While private AI investment in China and the European Union, including the United Kingdom, declined by 44.2% and 14.1%, respectively, since 2022, the United States experienced a notable increase of 22.1% in the same time frame.\nIn 2022, AI-related positions made up 2.0% of all job postings in America, a figure that decreased to 1.6% in 2023. 
This decline in AI job listings is attributed to fewer postings from leading AI firms and a reduced proportion of tech roles within these companies.\nA new McKinsey survey reveals that 42% of surveyed organizations report cost reductions from implementing AI (including generative AI), and 59% report revenue increases. Compared to the previous year, there was a 10 percentage point increase in respondents reporting decreased costs, suggesting AI is driving significant business efficiency gains.\nGlobal private AI investment has fallen for the second year in a row, though less than the sharp decrease from 2021 to 2022. The count of newly funded AI companies spiked to 1,812, up 40.6% from the previous year.\nA 2023 McKinsey report reveals that 55% of organizations now use AI (including generative AI) in at least one business unit or function, up from 50% in 2022 and 20% in 2017.\nSince surpassing Japan in 2013 as the leading installer of industrial robots, China has significantly widened the gap with the nearest competitor nation. In 2013, China’s installations accounted for 20.8% of the global total, a share that rose to 52.4% by 2022.\nIn 2017, collaborative robots represented a mere 2.8% of all new industrial robot installations, a figure that climbed to 9.9% by 2022. Similarly, 2022 saw a rise in service robot installations across all application categories, except for medical robotics. This trend indicates not just an overall increase in robot installations but also a growing emphasis on deploying robots for human-facing roles.\nIn 2023, several studies assessed AI’s impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI’s potential to bridge the skill gap between low- and high-skilled workers. 
Still other studies caution that using AI without proper oversight can lead to diminished performance.\nIn 2023, AI was mentioned in 394 earnings calls (nearly 80% of all Fortune 500 companies), a notable increase from 266 mentions in 2022. Since 2018, mentions of AI in Fortune 500 earnings calls have nearly doubled. The most frequently cited theme, appearing in 19.7% of all earnings calls, was generative AI.\nChapter 5: Science and Medicine\nThis year’s AI Index introduces a new chapter on AI in science and medicine in recognition of AI’s growing role in scientific and medical discovery. It explores 2023’s standout AI-facilitated scientific achievements, including advanced weather forecasting systems like GraphCast and improved material discovery algorithms like GNoME. The chapter also examines medical AI system performance, important 2023 AI-driven medical innovations like SynthSR and ImmunoSEIRA, and trends in the approval of FDA AI-related medical devices.\nDOWNLOAD CHAPTER 5\n1. Scientific progress accelerates even further, thanks to AI.\n2. AI helps medicine take significant strides forward.\n3. Highly knowledgeable medical AI has arrived.\n4. The FDA approves more and more AI-related medical devices.\nIn 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications—from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.\nIn 2023, several significant medical systems were launched, including EVEscape, which enhances pandemic prediction, and AlphaMissence, which assists in AI-driven mutation classification. AI is increasingly being utilized to propel medical advancements.\nOver the past few years, AI systems have shown remarkable improvement on the MedQA benchmark, a key test for assessing AI’s clinical knowledge. 
The standout model of 2023, GPT-4 Medprompt, reached an accuracy rate of 90.2%, marking a 22.6 percentage point increase from the highest score in 2022. Since the benchmark’s introduction in 2019, AI performance on MedQA has nearly tripled.\nIn 2022, the FDA approved 139 AI-related medical devices, a 12.1% increase from 2021. Since 2012, the number of FDA-approved AI-related medical devices has increased by more than 45-fold. AI is increasingly being used for real-world medical purposes.\nChapter 6: Education\nThis chapter examines trends in AI and computer science (CS) education, focusing on who is learning, where they are learning, and how these trends have evolved over time. Amid growing concerns about AI’s impact on education, it also investigates the use of new AI tools like ChatGPT by teachers and students.The analysis begins with an overview of the state of postsecondary CS and AI education in the United States and Canada, based on the Computing Research Association’s annual Taulbee Survey. It then reviews data from Informatics Europe regarding CS education in Europe. This year introduces a new section with data from Studyportals on the global count of AI-related English-language study programs.\xa0The chapter wraps up with insights into K–12 CS education in the United States from Code.org and findings from the Walton Foundation survey on ChatGPT’s use in schools.\nDOWNLOAD CHAPTER 6\n1. The number of American and Canadian CS bachelor’s graduates continues to rise, new CS master’s graduates stay relatively flat, and PhD graduates modestly grow.\n2. The migration of AI PhDs to industry continues at an accelerating pace.\n3. Less transition of academic talent from industry to academia.\n4. CS education in the United States and Canada becomes less international.\n5. More American high school students take CS courses, but access problems remain.\n6. AI-related degree programs are on the rise internationally.\n7. 
The United Kingdom and Germany lead in European informatics, CS, CE, and IT graduate production.\nWhile the number of new American and Canadian bachelor’s graduates has consistently risen for more than a decade, the number of students opting for graduate education in CS has flattened. Since 2018, the number of CS master’s and PhD graduates has slightly declined.\nIn 2011, roughly equal percentages of new AI PhDs took jobs in industry (40.9%) and academia (41.6%). However, by 2022, a significantly larger proportion (70.7%) joined industry after graduation compared to those entering academia (20.0%). Over the past year alone, the share of industry-bound AI PhDs has risen by 5.3 percentage points, indicating an intensifying brain drain from universities into industry.\nIn 2019, 13% of new AI faculty in the United States and Canada were from industry. By 2021, this figure had declined to 11%, and in 2022, it further dropped to 7%. This trend indicates a progressively lower migration of high-level AI talent from industry into academia.\nProportionally fewer international CS bachelor’s, master’s, and PhDs graduated in 2022 than in 2021. The drop in international students in the master’s category was especially pronounced.\nIn 2022, 201,000 AP CS exams were administered. Since 2007, the number of students taking these exams has increased more than tenfold. However, recent evidence indicates that students in larger high schools and those in suburban areas are more likely to have access to CS courses.\nThe number of English-language, AI-related postsecondary degree programs has tripled since 2017, showing a steady annual increase over the past five years. Universities worldwide are offering more AI-focused degree programs.\nThe United Kingdom and Germany lead Europe in producing the highest number of new informatics, CS, CE, and information bachelor’s, master’s, and PhD graduates. 
On a per capita basis, Finland leads in the production of both bachelor’s and PhD graduates, while Ireland leads in the production of master’s graduates.\nChapter 7: Policy and Governance\nAI’s increasing capabilities have captured policymakers’ attention. Over the past year, several nations and political bodies, such as the United States and the European Union, have enacted significant AI-related policies. The proliferation of these policies reflect policymakers’ growing awareness of the need to regulate AI and improve their respective countries’ ability to capitalize on its transformative potential.This chapter begins examining global AI governance starting with a timeline of significant AI policymaking events in 2023. It then analyzes global and U.S. AI legislative efforts, studies AI legislative mentions, and explores how lawmakers across the globe perceive and discuss AI. Next, the chapter profiles national AI strategies and regulatory efforts in the United States and the European Union. Finally, it concludes with a study of public investment in AI within the United States.\nDOWNLOAD CHAPTER 7\n1. The number of AI regulations in the United States sharply increases.\n2. The United States and the European Union advance landmark AI policy action.\n3. AI captures U.S. policymaker attention.\n4. Policymakers across the globe cannot stop talking about AI.\n5. More regulatory agencies turn their attention toward AI.\nThe number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.\nIn 2023, policymakers on both sides of the Atlantic put forth substantial AI regulatory proposals. The European Union reached a deal on the terms of the AI Act, a landmark piece of legislation enacted in 2024. 
Meanwhile, President Biden signed an Executive Order on AI, the most notable AI policy initiative in the United States that year.\nThe year 2023 witnessed a remarkable increase in AI-related legislation at the federal level, with 181 bills proposed, more than double the 88 proposed in 2022.\nMentions of AI in legislative proceedings across the globe have nearly doubled, rising from 1,247 in 2022 to 2,175 in 2023. AI was mentioned in the legislative proceedings of 49 countries in 2023. Moreover, at least one country from every continent discussed AI in 2023, underscoring the truly global reach of AI policy discourse.\nThe number of U.S. regulatory agencies issuing AI regulations increased to 21 in 2023 from 17 in 2022, indicating a growing concern over AI regulation among a broader array of American regulatory bodies. Some of the new regulatory agencies that enacted AI-related regulations for the first time in 2023 include the Department of Transportation, the Department of Energy, and the Occupational Safety and Health Administration.\nChapter 8: Diversity\nThe demographics of AI developers often differ from those of users. For instance, a considerable number of prominent AI companies and the datasets utilized for model training originate from Western nations, thereby reflecting Western perspectives. The lack of diversity can perpetuate or even exacerbate societal inequalities and biases.This chapter delves into diversity trends in AI. The chapter begins by drawing on data from the Computing Research Association (CRA) to provide insights into the state of diversity in American and Canadian computer science (CS) departments. A notable addition to this year’s analysis is data sourced from Informatics Europe, which sheds light on diversity trends within European CS education. Next, the chapter examines participation rates at the Women in Machine Learning (WiML) workshop held annually at NeurIPS. 
Finally, the chapter analyzes data from Code.org, offering insights into the current state of diversity in secondary CS education across the United States.\xa0The AI Index is dedicated to enhancing the coverage of data shared in this chapter. Demographic data regarding AI trends, particularly in areas such as sexual orientation, remains scarce. The AI Index urges other stakeholders in the AI domain to intensify their endeavors to track diversity trends associated with AI and hopes to comprehensively cover such trends in future reports.\nDOWNLOAD CHAPTER 8\n1. U.S. and Canadian bachelor’s, master’s, and PhD CS students continue to grow more ethnically diverse.\n2. Substantial gender gaps persist in European informatics, CS, CE, and IT graduates at all educational levels.\n3. U.S. K–12 CS education is growing more diverse, reflecting changes in both gender and ethnic representation.\nWhile white students continue to be the most represented ethnicity among new resident graduates at all three levels, the representation from other ethnic groups, such as Asian, Hispanic, and Black or African American students, continues to grow. For instance, since 2011, the proportion of Asian CS bachelor’s degree graduates has increased by 19.8 percentage points, and the proportion of Hispanic CS bachelor’s degree graduates has grown by 5.2 percentage points.\nEvery surveyed European country reported more male than female graduates in bachelor’s, master’s, and PhD programs for informatics, CS, CE, and IT. While the gender gaps have narrowed in most countries over the last decade, the rate of this narrowing has been slow.\nThe proportion of AP CS exams taken by female students rose from 16.8% in 2007 to 30.5% in 2022. 
Similarly, the participation of Asian, Hispanic/Latino/Latina, and Black/African American students in AP CS has consistently increased year over year.\nChapter 9: Public Opinion\nAs AI becomes increasingly ubiquitous, it is important to understand how public perceptions regarding the technology evolve. Understanding this public opinion is vital in better anticipating AI’s societal impacts and how the integration of the technology may differ across countries and demographic groups.This chapter examines public opinion on AI through global, national, demographic, and ethnic perspectives. It draws upon several data sources: longitudinal survey data from Ipsos profiling global AI attitudes over time, survey data from the University of Toronto exploring public perception of ChatGPT, and data from Pew examining American attitudes regarding AI. The chapter concludes by analyzing mentions of significant AI models on Twitter, using data from Quid.\nDOWNLOAD CHAPTER 9\n1. People across the globe are more cognizant of AI’s potential impact—and more nervous.\n2. AI sentiment in Western nations continues to be low, but is slowly improving.\n3. The public is pessimistic about AI’s economic impact.\n4. Demographic differences emerge regarding AI optimism.\n5. ChatGPT is widely known and widely used.\nA survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 38% in 2022.\nIn 2022, several developed Western nations, including Germany, the Netherlands, Australia, Belgium, Canada, and the United States, were among the least positive about AI products and services. 
Since then, each of these countries has seen a rise in the proportion of respondents acknowledging the benefits of AI, with the Netherlands experiencing the most significant shift.\nIn an Ipsos survey, only 37% of respondents feel AI will improve their job. Only 34% anticipate AI will boost the economy, and 32% believe it will enhance the job market.\nSignificant demographic differences exist in perceptions of AI’s potential to enhance livelihoods, with younger generations generally more optimistic. For instance, 59% of Gen Z respondents believe AI will improve entertainment options, versus only 40% of baby boomers. Additionally, individuals with higher incomes and education levels are more optimistic about AI’s positive impacts on entertainment, health, and the economy than their lower-income and less-educated counterparts.\nAn international survey from the University of Toronto suggests that 63% of respondents are aware of ChatGPT. Of those aware, around half report using ChatGPT at least once weekly.\nPast Reports\n2023\n2022\n2021\n2019\n2018\n2017\nArtificial Intelligence Index\nStanford Institute for Human-Centered Artificial Intelligence\nCordura Hall\n201 Panama Street\nStanford University\nStanford, CA 94305\nSUBSCRIBE TO THE HAI NEWSLETTER\nEmail\nTwitter\nLinkedIn\nStanford Home\nMaps & Directions\nSearch Stanford\nEmergency Info\nTerms of Use\nPrivacy\nCopyright\nTrademarks\nNon-Discrimination\nAccessibility\n© Stanford University. Stanford, California 94305.')]
```

### Example of retrieval mode with a list of URLs

```python
# In retrieval mode, pass the URLs via `links`; the query string is unused, so an empty input is fine.
url_retriever = NimbleSearchRetriever(links=["example.com"])
url_retriever.invoke(input="")
```

```output
[Document(metadata={'title': None, 'snippet': None, 'url': 'https://example.com', 'position': None, 'entity_type': 'HtmlContent'}, page_content='<!doctype html>\n<html>\n<head>\n    <title>Example Domain</title>\n\n    <meta charset="utf-8" />\n    <meta http-equiv="Content-type" content="text/html; charset=utf-8" />\n    <meta name="viewport" content="width=device-width, initial-scale=1" />\n    <style type="text/css">\n    body {\n        background-color: #f0f0f2;\n        margin: 0;\n        padding: 0;\n        font-family: -apple-system, system-ui, BlinkMacSystemFont, "Segoe UI", "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;\n        \n    }\n    div {\n        width: 600px;\n        margin: 5em auto;\n        padding: 2em;\n        background-color: #fdfdff;\n        border-radius: 0.5em;\n        box-shadow: 2px 3px 7px 2px rgba(0,0,0,0.02);\n    }\n    a:link, a:visited {\n        color: #38488f;\n        text-decoration: none;\n    }\n    @media (max-width: 700px) {\n        div {\n            margin: 0 auto;\n            width: auto;\n        }\n    }\n    </style>    \n</head>\n\n<body>\n<div>\n    <h1>Example Domain</h1>\n    <p>This domain is for use in illustrative examples in documents. You may use this\n    domain in literature without prior coordination or asking for permission.</p>\n    <p><a href="https://www.iana.org/domains/example">More information...</a></p>\n</div>\n</body>\n</html>\n')]
```
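In retrieval mode the raw HTML is returned as `page_content`, so you will usually want to strip the markup before passing it to an LLM. Below is a minimal sketch using only the standard library's `html.parser` (the `TextExtractor` and `html_to_text` names are illustrative; a library such as BeautifulSoup is a common alternative):

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text, skipping <style> and <script> blocks."""

    def __init__(self):
        super().__init__()
        self._skip = False
        self._chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("style", "script"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("style", "script"):
            self._skip = False

    def handle_data(self, data):
        # Keep only non-empty text nodes outside skipped tags.
        if not self._skip and data.strip():
            self._chunks.append(data.strip())

    def text(self):
        return "\n".join(self._chunks)


def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return parser.text()
```

Applied to the output above, `html_to_text(docs[0].page_content)` would reduce the example.com page to its heading and paragraph text, dropping the inline stylesheet.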

## Use within a chain

Like other retrievers, NimbleSearchRetriever can be incorporated into LLM applications via [chains](/oss/how-to/sequence/).

We will need an LLM or chat model:

```python
from langchain_openai import ChatOpenAI

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass()
llm = ChatOpenAI(model="gpt-4o", temperature=0)
```

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

prompt = ChatPromptTemplate.from_template(
    """Answer the question based only on the context provided.

Context: {context}

Question: {question}"""
)


def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
```

```python
chain.invoke("Who is the CEO of Nimbleway?")
```

```output
'The CEO of Nimble Way is Uriel Knorovich.'
```
