
Cybersecurity Predictions 2025 from Industry Leaders on the Future of Digital Defense

As we step into 2025, the cyber threat landscape continues to evolve at an unprecedented pace. Two prominent voices in the cybersecurity realm—Tom Holloway, Head of Cybersecurity at Redcentric, and Alastair Paterson, CEO and Co-Founder of Harmonic Security—share their insights on what lies ahead. From the integration of AI in security operations to the growing emphasis on data protection and compliance frameworks, their predictions paint a vivid picture of the opportunities and challenges shaping the future of cybersecurity.

Tom Holloway, Head of Cybersecurity at IT and cybersecurity service provider Redcentric.

Says: "We will see an increasingly wide use of AI across many areas. As businesses and the public sector recognise the value brought by AI, there will be be an expansion of cloud computing, with on-prem (local software that is installed and hosted in a company’s own IT environment) largely unable to provide the technological edge in a rapidly moving market.

“AI will also be used by adversaries, and we will see increasingly sophisticated phishing attacks as sentiment and tone are better emulated. Adversaries will also use AI to identify exploitable vulnerabilities. In response to these increasingly sophisticated attacks, we will see the expansion of zero trust architecture: an approach in which the network is assumed hostile and every request must therefore be verified.

“We will also see increasingly effective collaboration between national and international law enforcement agencies and technology businesses in disrupting cyber-criminals. Security of Internet of Things (IoT) devices will remain challenging, and I suspect that we’ll see a preference for using hardware produced by friends and allies over cheaper and perhaps compromised devices from the Far East.

“In the face of these rising threats, we will see a significant premium for secure software development, a growth market for those with the necessary skills. Data scientists will be worth their weight in gold. We are likely to see ongoing concentration across the technology sector, with novel start-ups being rapidly acquired for their niche solutions to the problems of the future.”
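
To illustrate the zero trust principle Holloway describes, where the network is assumed hostile and every request is verified, here is a minimal sketch in Python. The names, signing scheme, and policy table are hypothetical; a real deployment would delegate token issuance and policy decisions to an identity provider and a policy engine rather than a hard-coded key.

```python
import hmac
import hashlib
from dataclasses import dataclass

# Hypothetical signing key; a real zero trust deployment would use an
# identity provider and short-lived, centrally issued credentials.
SIGNING_KEY = b"replace-with-managed-secret"

@dataclass
class Request:
    user_id: str
    device_id: str
    resource: str
    token: str  # signature binding user, device, and resource

def expected_token(user_id: str, device_id: str, resource: str) -> str:
    message = f"{user_id}|{device_id}|{resource}".encode()
    return hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()

def is_authorised(req: Request, policy: dict[str, set[str]]) -> bool:
    """Treat every request as untrusted, wherever it originates.

    Access is granted only if the token is valid AND the user is
    explicitly permitted to reach the requested resource.
    """
    token_ok = hmac.compare_digest(
        req.token,
        expected_token(req.user_id, req.device_id, req.resource),
    )
    policy_ok = req.resource in policy.get(req.user_id, set())
    return token_ok and policy_ok

# Even an "internal" request is rejected without a valid token.
policy = {"alice": {"payroll-api"}}
good = Request("alice", "laptop-42", "payroll-api",
               expected_token("alice", "laptop-42", "payroll-api"))
forged = Request("alice", "laptop-42", "payroll-api", "forged-token")
print(is_authorised(good, policy))    # True
print(is_authorised(forged, policy))  # False
```

The point is not the specific signing scheme but that authorisation is evaluated per request against identity, device, and resource, rather than inferred from network location.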

Alastair Paterson, CEO and co-founder of Harmonic Security.

Says: I think there are five themes that are likely to rear their heads over the next 12 months:

· Compliance looms large. Incoming AI compliance frameworks are going to cause a headache for organizations.

· Gradual shifts in third-party risk. This won’t be fixed overnight, but we’re going to see a step-change in how companies manage third parties – especially with regard to AI.

· “Security for AI” improvements. The tech companies behind the models will get even better at securing them and offsetting model attacks.

· Data is the latest perimeter in vogue. With plenty of funding behind it, expect data security to be the talk of 2025.

· AI for security operations goes mainstream. AI agents for SOC automation represent a huge opportunity for security teams looking to achieve real productivity boosts.

Compliance for AI is something we’re going to hear a lot about in 2025. The EU AI Act is the obvious candidate, but there’s plenty to pay attention to in the US. National regulatory initiatives, such as the proposed SEC rules, need attention, but there is also a growing patchwork of state-level legislation, such as Colorado’s Artificial Intelligence Act. That is just one example; no fewer than 15 US states have enacted AI-related legislation, with more in development.

We’re long overdue a shift in how we manage third parties. This is one of many pre-existing problems in the industry that AI has shone a light on. Going forward, I’d wager that we’re going to be speaking less about an “AI problem” and more about a third-party risk problem. Sure, you can block ChatGPT and buy an enterprise subscription to Copilot, but are you really going to block Grammarly, Canva, DocuSign, LinkedIn, or the ever-growing presence of Gemini through your Chrome browser?

As more organizations choose to buy rather than build, there’s going to be an awful lot of AI to manage. Yet, crucially, the way we do third-party risk fundamentally doesn’t work. We’re still sending rigid questionnaires that give the facade of security. We’re going to see frameworks emerge that give these assessments teeth. The Digital Operational Resilience Act (DORA), for example, introduces industry-specific requirements that intersect with AI use, particularly in financial services and other regulated sectors.

When it comes to securing GenAI, one area that is still underserved is data protection. In reality, most current approaches rely on legacy technologies such as regular expressions or labelling.
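
As a rough illustration of the legacy, pattern-matching approach Paterson has in mind, the sketch below flags sensitive-looking strings with regular expressions. The patterns are hypothetical simplifications; the limitation is that regexes catch well-structured identifiers but miss contextual data, such as a customer name or an unreleased product plan pasted into a GenAI prompt.

```python
import re

# Hypothetical, simplified patterns; real tools maintain many more,
# tuned per data type and locale.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}

def scan_prompt(text: str) -> dict[str, list[str]]:
    # Return every substring that matches a known sensitive-data pattern.
    return {
        label: pattern.findall(text)
        for label, pattern in PATTERNS.items()
        if pattern.search(text)
    }

print(scan_prompt("Summarise: card 4111 1111 1111 1111, contact jo@example.com"))
# {'email': ['jo@example.com'], 'payment_card': ['4111 1111 1111 1111']}
```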

Data security has probably been underserved for the last decade, but recent investment in the space, along with the clear need, will make it one of the most talked-about areas over the next 12 months.
