[ad_1]
What the researchers have to say about the AI worm
The research team, comprising Ben Nassi of Cornell Tech, Stav Cohen of the Israel Institute of Technology, and Ron Bitton of Intuit, named the worm after the original Morris worm. This notorious computer worm unleashed in 1988. Unlike its predecessor, Morris II targets AI apps, specifically those using large language models (LLMs) like Gemini Pro, ChatGPT 4.0, and LLaVA, to generate text and images.
The worm uses a technique called “adversarial self-replicating prompts.” These prompts, when fed into the LLM, trick the model into replicating them and initiating malicious actions. This includes:The researchers described: “The study demonstrates that attackers can insert such prompts into inputs that, when processed by GenAI models, prompt the model to replicate the input as output (replication) and engage in malicious activities (payload). Additionally, these inputs compel the agent to deliver them (propagate) to new agents by exploiting the connectivity within the GenAI ecosystem. We demonstrate the application of Morris II against GenAI-powered email assistants in two use cases (spamming and exfiltrating personal data), under two settings (black-box and white-box accesses), using two types of input data (text and images).”
The researchers successfully demonstrated the worm’s capabilities in two scenarios:
- Spamming: Morris II generated and sent spam emails through the compromised email assistant.
- Data Exfiltration: The worm extracted sensitive personal data from the infected system.
The researchers said that AI worms like this can help cyber criminals to extract confidential information, including credit card details, social security numbers and more. They also uploaded a video on YouTube to explain how the worm works:
ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications
What AI companies said about the worm
In a statement, an OpenAI spokesperson said: “They appear to have found a way to exploit prompt-injection type vulnerabilities by relying on user input that hasn’t been checked or filtered.”
The spokesperson said that the company is making its systems more resilient and added that developers should use methods that ensure they are not working with harmful input.
Meanwhile, Google refused to comment about the research.
[ad_2]
Source link