vladbogo posted an update Feb 24
"LLM Agents can Autonomously Hack Websites" is a new paper that investigates the capacity of LLMs to autonomously execute cybersecurity attacks on websites, such as SQL injections without human guidance.

Key points:
* It uses an LLM integrated with Playwright, a browser-automation framework that drives a headless browser, enabling automated web interactions through function calling.
* It gives the LLM access to seven web-hacking documents and planning capabilities through specific prompting; the authors do not disclose the exact methods, to prevent misuse.
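
The agent setup above can be sketched roughly as an LLM emitting function calls that are dispatched to Playwright, with the page content fed back as the next observation. This is an illustrative sketch only: the tool names, schemas, and dispatch logic below are assumptions, not the paper's actual prompts or code.

```python
import json

# Illustrative tool schema exposed to the LLM via function calling.
# The real paper's tool set and prompts are not disclosed.
BROWSER_TOOLS = [
    {"name": "goto", "description": "Navigate to a URL",
     "parameters": {"type": "object",
                    "properties": {"url": {"type": "string"}},
                    "required": ["url"]}},
    {"name": "fill", "description": "Type text into a form field",
     "parameters": {"type": "object",
                    "properties": {"selector": {"type": "string"},
                                   "text": {"type": "string"}},
                    "required": ["selector", "text"]}},
    {"name": "click", "description": "Click an element",
     "parameters": {"type": "object",
                    "properties": {"selector": {"type": "string"}},
                    "required": ["selector"]}},
]

def dispatch(page, call):
    """Route one function call emitted by the LLM to a Playwright-style page.

    `page` is expected to expose goto/fill/click/content (the Playwright
    sync Page API has these); `call` is a dict with the function `name`
    and a JSON string of `arguments`, as returned by function calling.
    """
    args = json.loads(call["arguments"])
    if call["name"] == "goto":
        page.goto(args["url"])
    elif call["name"] == "fill":
        page.fill(args["selector"], args["text"])
    elif call["name"] == "click":
        page.click(args["selector"])
    # The resulting page HTML is the observation fed back to the model.
    return page.content()
```

In a full agent loop, the model would alternate between choosing a tool call from `BROWSER_TOOLS` and receiving the returned HTML, until it decides the task is done.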

GPT-4 achieves a 73.3% success rate on the tested vulnerabilities, highlighting the potential cybersecurity risks posed by advanced LLMs. The open models tested cannot yet perform these types of attacks (results in screenshot).

Congrats to the authors for their work!

Paper: LLM Agents can Autonomously Hack Websites (2402.06664)

Closed-source models are increasingly going to take longer and longer to be made to do something considered "bad", so they will fall farther and farther behind if people try to use them for defence against the gradually rising use of open-source uncensored models in attacks, until most blue teams also move to decentralized open-source models to build better defensive tool chains.

Generally speaking, this paper is great from the perspective of furthering offensive web-security automation capabilities with ML, but it focuses on the wrong things, like suggesting organizations try even harder to censor their platforms to be safe, rather than teaching people how to use ML/LLMs to mitigate the risk of rogue agent chains, like they do with chemistry and genetics.


Agree! I don't think it's at all feasible to handle these types of problems/attacks at the provider level. So, as you said, I think new open-source defensive tool chains will emerge. That said, I think the paper makes a good step towards showcasing some current capabilities and can enable further research, both into finding more complex attacks and into mitigations.
