Staying abreast of a rapidly evolving industry like artificial intelligence takes substantial effort. The competitive landscape within AI is intense and growing more so, with players across several sectors ramping up investment.
With increased investment in AI development, including OpenAI’s tools and Meta’s new Llama 2 generative AI model, things are heating up, opening the floodgates to questions around security, ethics and long-term adoption.
There is a great deal of concern around the ethical and safe use of these tools. In fact, more than 1 100 signatories, including Elon Musk, Steve Wozniak and Tristan Harris of the Center for Humane Technology, signed an open letter posted online a few months ago calling on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4”.
Managing the risks
With a focus on the risks of this rapidly evolving technology, the letter argues that there is a “level of planning and management that is not happening”, and that the planning and management that should be happening has instead been replaced by unnamed “AI labs” becoming “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control”.
Although regulation is an important step, even if regulators could move very quickly to put bold safeguards in place, there is no guarantee they would actually work. Think about the drug trade or crypto: there’s legislation, but it’s not stopping nefarious activity from taking place.
Similarly, while many in the industry work to build regulations, attackers are neither concerned with nor bound by them, and will take every opportunity to harness the power of AI for malicious purposes. There is no doubt that AI is changing the cyber arms race, and taking it nuclear.
AI-enabled malicious tools
For starters, AI-enabled threat actors can automate malicious activities such as identity theft, phishing, data theft and fraud, carrying them out more rapidly and with greater accuracy than human actors could manage on their own.
Companies need to be aware of the threats enabled by AI hacking tools and take appropriate steps to secure their networks against them.
With the advent of AI-driven hacking tools, malicious agents now possess potent resources for automating their attack strategies. Two recent examples are XXXGPT and Wolf GPT, tools that use generative models to produce malware, making them particularly perilous for companies to defend against.
XXXGPT employs a large language model to generate malware based on its training data, enabling the creation of malware capable of slipping past security controls. Moreover, the tool integrates an obfuscation component that masks the code the model produces, making prevention and detection more difficult.
Wolf GPT is another dangerous AI-powered hacking tool, one of whose objectives is cloaking attackers in a layer of anonymity within specific attack vectors. It excels at generating highly realistic malware by capitalising on extensive datasets of pre-existing malicious code, and it also empowers attackers to orchestrate sophisticated phishing campaigns.
Aping reality
There’s also the scourge of deepfake technology, which is rapidly getting better at mimicking reality. Almost anyone these days can create fake pictures, video, audio and even text that appears so convincingly real that it defies all but the closest scrutiny. Unfortunately, deepfakes can be used not only for social engineering but also for extortion.
Given the formidable capabilities of these tools, it is critical for organisations in every sector to brace for their potential impact and to update their security practices in order to stay ahead of threats.
Advancing defences
One way of doing this is to work with cybersecurity companies that are using AI tools to advance their defences too, which is why it is so important to bring the right partner on board. A good partner will know how to strike the delicate balance between intelligent tools and processes and the human experience and knowledge that have proven pivotal in mitigating security incidents and expediting the identification and handling of threats.
For instance, security professionals can devise bespoke rules that let AI sift out incidents with no substantial security implications.
By automating these rule-based scenarios with AI, human analysts are freed from onerous, mundane tasks and can instead focus on work that needs human insight and acumen. The right partner can help businesses build context around today’s ever-changing threat landscape.
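To make the idea concrete, the sketch below shows, in Python, what one such bespoke triage rule might look like. It is a minimal illustration under assumed names (Alert, KNOWN_BENIGN, triage), not Arctic Wolf’s actual implementation; a real security operations platform would pair rules like these with machine-learning models and human review.

```python
# Minimal sketch of rule-based alert triage, assuming a simplified
# alert feed. All names here (Alert, KNOWN_BENIGN, triage) are
# hypothetical illustrations, not any vendor's actual implementation.
from dataclasses import dataclass

# Signatures the team has previously triaged and confirmed benign
KNOWN_BENIGN = {"scheduled-vuln-scan", "it-approved-remote-tool"}

@dataclass
class Alert:
    source: str          # e.g. "edr", "firewall", "identity"
    severity: int        # 1 (informational) to 5 (critical)
    asset_is_test: bool  # alert fired on a lab/test asset
    signature: str

def is_low_value(alert: Alert) -> bool:
    """The kind of bespoke rule analysts might write so that
    routine noise is closed automatically."""
    if alert.asset_is_test and alert.severity <= 2:
        return True  # low-severity noise from test assets
    if alert.signature in KNOWN_BENIGN:
        return True  # previously confirmed false positives
    return False

def triage(alerts: list[Alert]) -> list[Alert]:
    """Auto-close low-value alerts; escalate the rest to a human."""
    return [a for a in alerts if not is_low_value(a)]

if __name__ == "__main__":
    feed = [
        Alert("edr", 1, True, "scheduled-vuln-scan"),
        Alert("identity", 4, False, "impossible-travel-login"),
    ]
    for alert in triage(feed):
        print("Escalate to analyst:", alert.signature)
```

The value of explicit rules like these is that every auto-closed alert is explainable, keeping human analysts in control of what the automation is allowed to discard.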
Similarly, most organisations today lack the security expertise and resources to maintain the round-the-clock monitoring, in the form of a security operations centre, that is needed to defend against modern threats. The right partner will have AI infused into their security operations, leveraging the scalability of cloud-based infrastructure, enlisting external security professionals and enforcing the right policies and procedures.
About Arctic Wolf
Arctic Wolf is the market leader in security operations. Using the cloud-native Arctic Wolf Platform, we help companies end cyber risk by providing security operations as a concierge service. Highly trained triage and concierge security experts work as an extension of internal teams to provide 24×7 monitoring, detection and response, ongoing risk management and security awareness training to give organisations the protection, resilience and guidance they need to defend against cyber threats.
- The author, Dan Schiappa, is chief product officer at Arctic Wolf
- Read more articles by Arctic Wolf on TechCentral
- This promoted content was paid for by the party concerned