The artificial intelligence (AI) revolution has swept through the tech industry since 2022. The technology is not only being used by major companies to offer improved experiences but is also being adopted by cybercriminals. According to a Reuters report, a Canadian cybersecurity official has warned that hackers and propagandists are exploiting AI for their own ends. Sami Khoury, head of the Canadian Centre for Cyber Security, said that AI can be used to create malicious software, draft convincing phishing emails and spread disinformation online.
How cybercriminals are misusing AI
Khoury said the agency has seen AI being used “in phishing emails, or crafting emails in a more focused way, in malicious code (and) in misinformation and disinformation.” He did not offer details or evidence of how cybercriminals are misusing AI, but the assertion adds to concerns that criminals have already begun adopting the emerging technology.
Khoury noted that using AI to draft malicious code is still in its early stages. “There’s still a way to go because it takes a lot to write a good exploit,” he said. What concerns him is the speed at which AI models are evolving: at this pace, he noted, it will be difficult to assess the malicious potential of these models before they are released to the public.
Cybersecurity watchdogs on risks of AI
Cybersecurity watchdogs from several countries have already published reports warning about the risks of AI, and cyber officials have singled out large language models (LLMs). These fast-advancing programs are trained on huge volumes of text, which lets them generate convincingly human-like dialogue, documents and more.
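To make that concrete, the snippet below is a minimal sketch of how such a model completes a prompt with fluent text. The Hugging Face `transformers` library and the small `gpt2` model used here are illustrative choices, not tools named in the reporting.

```python
# Minimal sketch of LLM text generation. The library and model are
# illustrative assumptions, not ones cited in the article.
# Requires: pip install transformers torch
from transformers import pipeline

# Load a small, publicly available language model.
generator = pipeline("text-generation", model="gpt2")

# Given a short prompt, the model continues it with human-like text.
prompt = "Large language models are trained on huge volumes of text, so they can"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)

print(result[0]["generated_text"])
```

The fluency officials worry about cuts both ways: a model will continue a benign prompt and a fraudulent one with equal ease.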
In March, Europol published a report on how OpenAI’s ChatGPT can be misused by cybercriminals. The European police organisation said the generative AI model made it possible “to impersonate an organisation or individual in a highly realistic manner even with only a basic grasp of the English language.”
Later, the UK’s National Cyber Security Centre updated a blog post to highlight that criminals “might use LLMs to help with cyber attacks beyond their current capabilities.”
Cybersecurity researchers have also demonstrated a variety of potentially malicious use cases, and some say they have spotted suspected AI-generated content in the wild. Last week, a former hacker reported discovering an LLM trained on malicious material and used it to draft an email designed to trick its recipient into making a cash transfer.
The LLM came up with a three-paragraph email that asked its target for help with an urgent invoice.
“I understand this may be short notice,” the LLM said, “but this payment is incredibly important and needs to be done in the next 24 hours.”