Benefit or threat? The impacts of generative AI on cybersecurity

Author: Sunil Sharma

At a glance

As cyber threats increase in sophistication and diversify, does generative AI serve as a powerful tool for organisations or does it present new and bigger risks? Here we discuss the challenges and benefits of embedding AI into cybersecurity strategies. 

The dual role of generative AI in cybersecurity

Generative AI is an emerging field within AI that uses advanced algorithms to learn patterns from input data in order to generate new, original content including text, visuals, audio, and complex designs. It can analyse and synthesise large volumes of data, especially unstructured information, from various sources with ease, providing valuable insights and enhancing decision-making processes. Its adoption is gradually increasing as organisations seek to strengthen the management and operation of their digital assets, join the race for innovation or follow the lead of industry peers.

In the context of cybersecurity, organisations can use generative AI to fortify defences, accelerate configuration management, automate security measures, and enhance threat detection and remediation. AI helps detect abnormal patterns and predict potential cyber threats, enabling preventative measures. It can also assist in trawling the dark web to identify potential threats before they materialise, providing a proactive approach to cybersecurity. Possibly the greatest benefit of generative AI to cybersecurity is the gain in speed and accuracy of capabilities and countermeasures compared with traditional methods and practices. Generative AI can bolster threat intelligence and threat detection, anticipating breaches before they occur and responding automatically to detected events and incidents in real time, before they can develop and escalate.
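
To make the pattern-detection idea above concrete, here is a minimal sketch of unsupervised anomaly detection over network flow features, assuming a scikit-learn environment. The feature set, sample values and contamination rate are illustrative assumptions, not a production design:

```python
# Minimal sketch: flagging anomalous network flows with an unsupervised model.
# Feature names, sample data and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, duration_seconds, failed_logins]
baseline_flows = np.array([
    [1_200, 3_400, 12, 0],
    [  900, 2_800,  8, 0],
    [1_500, 4_100, 15, 1],
    [1_100, 3_000, 10, 0],
])

# Train on traffic assumed to be normal; contamination is the expected
# fraction of outliers and would be tuned against real data.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_flows)

new_flows = np.array([
    [ 1_300, 3_200, 11,  0],   # resembles baseline traffic
    [95_000,    40,  2, 30],   # exfiltration-like transfer with many failed logins
])

for flow, label in zip(new_flows, model.predict(new_flows)):
    status = "ANOMALY - escalate for review" if label == -1 else "normal"
    print(flow, status)
```

In practice such a model would sit alongside, not replace, rule-based detection, with alerts routed into the existing incident response workflow.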

Conversely, the same AI technologies are accessible to threat actors, including nation states and cyber criminals, who can use them to understand and exploit vulnerabilities in organisations. Generative AI can be used to create more sophisticated scams, such as impersonating individuals through deepfake voice or video, leading to potential financial loss or espionage. It can be used to create malware with ease, refine social engineering tactics, target asset vulnerabilities and launch stealthy attacks that elude traditional detection systems. This duality highlights the need for careful consideration and regulation of AI technologies to maximise their benefits while mitigating adverse risks.

The AI Cybersecurity Dimensions framework

Organisations must have a comprehensive cybersecurity strategy to effectively govern and manage risks posed to digital assets and environments. 

The AI Cybersecurity Dimensions framework details key considerations when integrating generative AI into cybersecurity capabilities, with full awareness of its dual nature in both offensive and defensive applications. The framework reinforces regulations and urges organisations, such as those that own and operate critical infrastructure, to strive for a higher level of maturity in their cyber governance, and to deepen their understanding of how generative AI-driven solutions compare with conventional strategies and methods.

Here are some key considerations that the AI Cybersecurity Dimensions framework says organisations should follow to protect themselves from AI-driven cyberattacks:

1. Enhance threat detection and response

  • Utilise AI and machine learning (ML) to improve the speed and accuracy of identifying and responding to potential threats in real time, including for offensive threat-hunting purposes.
  • Implement AI-driven intrusion detection systems (IDS) and autonomous prevention measures to detect and mitigate attacks as they occur.

2. Automate security measures

  • Use AI to automate routine security tasks, reducing human error and increasing efficiency.
  • Employ AI algorithms for malware detection, spam detection, and vulnerability management to maintain robust security protocols (a minimal sketch of one such task appears after this list).

3. Conduct regular cybersecurity training

  • Ensure all employees, from the board to individual staff members, are trained on cybersecurity best practices and the potential risks associated with AI.
  • Promote awareness of phishing attacks, social engineering tactics, and other common cyber threats.

4. Simulate cyberattack scenarios

  • Use AI to simulate potential cyberattacks and enhance incident response plans.
  • Conduct regular simulated exercises to prepare cybersecurity professionals and employees for real-world attack scenarios and improve their response strategies.

5. Strengthen cyber governance

  • Develop and enforce robust cyber governance policies that incorporate the use and adoption of AI, ensuring compliance with regulations and industry standards.
  • Foster public-private partnerships and international cooperation to enhance enforcement and share threat intelligence.
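
To make consideration 2 concrete, the sketch below automates one routine task: triaging suspected phishing email with a simple text classifier. It assumes a scikit-learn environment, and the training examples, labels and threshold logic are illustrative assumptions rather than a recommended model:

```python
# Minimal sketch of automating a routine security task: a toy phishing-email
# classifier. The training corpus here is an illustrative assumption; a real
# deployment would use a large, curated and regularly refreshed dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice for last month is attached",
    "Team meeting moved to 3pm tomorrow",
    "URGENT: verify your password now or lose access",
    "You have won a prize, click this link to claim it",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = phishing

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

incoming = ["Please confirm your account password immediately via this link"]
probability = classifier.predict_proba(incoming)[0][1]
print(f"Phishing probability: {probability:.2f}")

# A workflow built on this could auto-quarantine messages above a tuned
# threshold and route borderline cases to an analyst for review.
```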

These considerations provide a comprehensive approach to safeguarding against AI-driven cyber threats, emphasising the importance of proactive measures, continuous training, and robust governance.

Cybersecurity extends far beyond IT and security professionals: from the board to individual employees, everyone has a stake and a responsibility in protecting assets, infrastructure and information. The AI Cybersecurity Dimensions framework is relevant to academics, policymakers and industry professionals alike, as it reinforces the shared obligation to combat evolving threats and keep the wider digital ecosystem secure.

How to take an integrated approach to generative AI

Leaders need to be purposeful and prepared when harnessing generative AI for an organisation's cybersecurity. These are actions to take now to help with a seamless transition when introducing generative AI-driven solutions and technologies:

Demonstrate executive ownership: Leaders must drive awareness and understanding of generative AI’s potential for integration into the overall cybersecurity strategy. Cultivate an open and future-ready mindset and make cybersecurity a priority agenda item for C-suite meetings.

Foster a culture of collaboration: Collaborate with industry peers, public and private sectors, government agencies and academia to establish partnerships and share knowledge and best practices to expand collective generative AI expertise. Strengthening cyber resilience requires an “all hands on deck” approach, especially to elevate and deploy threat intelligence.

Incorporate cybersecurity into the organisation’s overall strategy: Cybersecurity must be part of an organisation’s overall risk strategy, not isolated from it, and considered as important as operational priorities. Integrate cybersecurity into data management and governance frameworks to boost cyber resilience and minimise the risk posed by AI-based threats.

Understand the organisation’s assets: Gaining a holistic view of all organisational assets provides deeper insight into each system’s criticality, possible risk sources and the corresponding methods to robustly safeguard them. Using generative AI to analyse and synthesise complex asset information enables prioritisation of protection measures that incorporate organisational context and real-time threat intelligence.
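
As an illustration of this kind of prioritisation, here is a minimal sketch that ranks assets for protection by combining criticality, exposure and threat intelligence signals. All field names, weights and sample assets are illustrative assumptions, not a prescribed scoring scheme:

```python
# Minimal sketch: ranking assets for protection. Field names, weights and
# sample inventory are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    criticality: int              # 1 (low) to 5 (business-critical)
    internet_exposed: bool
    open_vulnerabilities: int
    active_threat_campaign: bool  # e.g. flagged by a threat-intelligence feed

def risk_score(asset: Asset) -> float:
    """Weighted score; higher means protect first."""
    score = asset.criticality * 2.0
    score += 3.0 if asset.internet_exposed else 0.0
    score += min(asset.open_vulnerabilities, 10) * 0.5
    score += 4.0 if asset.active_threat_campaign else 0.0
    return score

inventory = [
    Asset("payroll-db", 5, False, 2, False),
    Asset("public-web-portal", 4, True, 6, True),
    Asset("dev-test-server", 2, True, 9, False),
]

for asset in sorted(inventory, key=risk_score, reverse=True):
    print(f"{asset.name:20s} score={risk_score(asset):.1f}")
```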

Practice secure-by-design: Embed cybersecurity into all projects and systems from the outset, or at the feasibility and conceptualisation stages. Cybersecurity must be a key consideration at every phase of the project lifecycle to reduce vulnerabilities and ensure optimal defences through the adoption of generative AI solutions and technologies. 

Read our latest Cybersecurity report and listen to our Transform podcast episode to gain more insights into the importance of safeguarding critical infrastructure. 

 
