The cybersecurity landscape is undergoing a transformation thanks to artificial intelligence, with both the industry and its adversaries adopting the technology to enhance the tools of their trades.
This is according to CYBER1 Solutions GM for sales Europe Hilbert Long. “Concurrently, the democratisation of AI is now in full swing, thanks to the emergence of cutting-edge generative AI tools such as OpenAI’s ChatGPT and Dall-E. These tools empower everyday users with the capabilities of artificial intelligence.”
ChatGPT attracted more than a million users in just the first five days after its November 2022 launch, all eager to put its AI prowess to the test, he said. People are excited to explore the potential of these generative AI tools across various domains, including coding, essay writing, artistic creation, blueprint design, package artwork, virtual world and avatar creation in the metaverse – and even troubleshooting production errors.
They are also engaged in an iterative process of refining their prompts and instructions to extract increasingly superior outcomes.
“However, while the positive applications of generative AI are incredibly promising, there is also the sobering reality of potential misuse and harm,” he said. “As users delved into this innovative tool, some discovered its capacity to generate malicious software, craft phishing e-mails and propagate propaganda. These same tools could also produce false information and push viewpoints that are linked to misinformation campaigns.”
Generative AI – no planning or management
With the growing popularity and widespread adoption of generative AI, the question of who bears responsibility for addressing the associated risks is becoming a widespread concern, Long said. In fact, over 1 100 signatories, including prominent figures like billionaire Elon Musk, Apple co-founder Steve Wozniak and Tristan Harris from the Centre for Humane Technology, recently posted an open letter that calls for an immediate pause, lasting at least six months, on the training by all AI labs of AI systems more powerful than GPT-4.
The letter argues that there is a notable absence of the necessary planning and management. It suggests that instead of proper planning and management, “AI labs” have become embroiled in a reckless race to develop and deploy increasingly powerful digital intelligences that no one, not even their creators, can fully comprehend, predict or reliably control.
Addressing the associated risks, the letter advocates for the development of powerful AI systems only when there is confidence in their positive impact and manageable risks.
Nevertheless, while regulation is considered a crucial step, there is no guarantee that even rapid and bold regulatory action will effectively contain the misuse of AI. Comparable situations, such as the drug trade or cryptocurrencies, demonstrate that legislation alone may not be sufficient to halt illicit activities.
“Furthermore, while many within the industry are working on regulations, malicious actors remain unconcerned about or unbound by these regulations,” Long said.
“They seize every opportunity to exploit the potential of AI for malicious purposes. This underscores the fact that AI is not only altering the landscape of the cyber arms race but also elevating it to a nuclear level of risk and competition.”
AI-enhanced malware
To begin with, malefactors armed with AI now have the ability to automate their malicious tools and activities, including identity theft, phishing, data exfiltration, fraud, and more, at a pace and precision beyond human capabilities.
“AI-enabled attacks happen when bad actors leverage AI as a tool to aid in the development of malware or to execute cyberattacks. These types of attacks have become increasingly popular and include activities such as the creation of malware, data poisoning and reverse engineering,” he said.
In addition, advanced conversational chatbots like ChatGPT, powered by large language models (LLMs) for natural language understanding (NLU), are significantly amplifying the potential for automating and enhancing the effectiveness of AI-facilitated malware attacks.
The imitation game
As an illustration, Long says an attacker may employ a chatbot to compose more convincing phishing messages that do not display the typical indicators of deception, such as grammar, syntax or spelling errors that are easily detected.
“In the context of ChatGPT specifically, its ability to generate code underscores the growing menace posed by AI-driven malware. In April this year, a security researcher at Forcepoint unveiled a zero-day virus with untraceable data extraction, solely relying on ChatGPT prompts.”
He stresses that although ChatGPT has demonstrated the capability to generate functions, it currently lacks robust mechanisms for error checking and prevention in production-style environments. “Right now, ChatGPT lacks the adversarial reasoning required by malware developers, such as considering countermeasures adversaries might employ to thwart their actions while advancing their own objectives. However, this could change overnight.”
This is why Long says companies must remain vigilant regarding the threats stemming from AI-driven hacking tools and implement the necessary measures to fortify their networks against these threats.
Another major concern is the rapid advancement of deepfake technology, which is becoming increasingly adept at mimicking reality. Almost anyone can now produce counterfeit images, videos, audio and text that appear deceptively genuine.
Updating protocols
“Given the formidable capabilities of these AI-fuelled tools, it is crucial for entities across all sectors to arm themselves against these dangers,” Long said. “This highlights the urgent need for businesses to update their security protocols to stay one step ahead of evolving threats.”
To do this, he said organisations must be made aware of the perils posed by AI hacking tools and take measures to safeguard their networks against these emerging threats. One way to do this is by leveraging AI tools to augment their security strategies.
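One long-established example of using machine learning to augment a security strategy is statistical filtering of suspicious e-mail. The sketch below — a toy naive Bayes scorer written from scratch, with entirely made-up training subject lines — is illustrative only, not a reference to any tool mentioned in the article; production systems would use far larger datasets and richer features.

```python
import math
from collections import Counter

# Hypothetical labelled subject lines (1 = phishing, 0 = benign) for illustration.
TRAINING = [
    ("verify your account password immediately", 1),
    ("urgent action required unlock your account", 1),
    ("your invoice payment is overdue click here", 1),
    ("meeting notes from yesterday", 0),
    ("quarterly report attached for review", 0),
    ("lunch plans for friday", 0),
]

def train(examples):
    """Count per-class word frequencies for a naive Bayes model."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def phishing_score(text, counts, totals):
    """Log-odds that `text` is phishing; positive means 'flag for review'."""
    vocab = set(counts[0]) | set(counts[1])
    score = 0.0
    for word in text.split():
        # Laplace smoothing so unseen words don't zero out a class probability.
        p_phish = (counts[1][word] + 1) / (totals[1] + len(vocab))
        p_clean = (counts[0][word] + 1) / (totals[0] + len(vocab))
        score += math.log(p_phish / p_clean)
    return score

counts, totals = train(TRAINING)
print(phishing_score("urgent verify your password", counts, totals) > 0)  # True
print(phishing_score("attached meeting report", counts, totals) > 0)      # False
```

The design point here is that such models score messages on statistical word patterns rather than the surface errors (grammar, spelling) that LLM-written phishing no longer exhibits — which is exactly why defensive tooling has to keep evolving alongside the attackers'.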
“This means choosing a cybersecurity partner with the expertise to balance these intelligent tools and streamlined processes with the invaluable human experience and knowledge that have proved key to mitigating security incidents and expediting the identification and handling of threats,” Long concluded.
CYBER1 Solutions is a cybersecurity specialist operating in EMEA.
Our solutions deliver information security; IT risk management; fraud detection; governance and compliance; as well as a full range of managed services. We also provide bespoke security services across the spectrum, with a portfolio that ranges from the formulation of our customers’ security strategies to the daily operation of end-point security solutions. To do this, we partner with world-leading security vendors to deliver cutting-edge technologies augmented by our wide range of professional services.
Our services give organisations in every sector the visibility into vulnerabilities they need to rapidly detect compromises, respond to breaches and stop attacks before they become an issue.
- Connect with Hilbert Long on LinkedIn
- Read more articles by CYBER1 Solutions on TechCentral
- This promoted content was paid for by the party concerned