2025 cybersecurity trends: deepfakes, AI and quantum computing

Classic attacks such as ransomware will persist, but artificial intelligence is completely changing the cybersecurity landscape


As the old saying goes, only two things are certain in life: death and taxes. If that phrase were refreshed for the 21st century, we might add hair-raising cybersecurity incidents to the list of life’s certainties. Rarely a week goes by without reports of a data breach, supply-chain attack or some other business-crippling ordeal.

Lucrative low-hanging-fruit attacks, such as phishing and ransomware, will continue in 2025. But the capabilities of attackers are evolving at a tremendous pace, changing the scale at which traditional attacks can be launched and leading to the emergence of new threat actors.

This is in large part thanks to advances in generative AI. Just as organisations are using GenAI to enhance productivity, so too are hackers. GenAI is enabling cybercriminals to gather intelligence quickly and efficiently, and to create more sophisticated attacks, such as deepfakes, with ease. Attacks once required a considerable amount of time and investment; perpetrators had to identify high-value targets, study patterns of communication and research company documents, for instance. But machines can now complete this prep work in a fraction of the time.

In cybersecurity, knowing what to defend against is half the battle. Here are the major cybersecurity trends that businesses must prepare for in the coming year.

AI compromise attacks

Businesses are growing increasingly reliant on AI systems. But as firms build the technology into their workflows, they are creating larger and more complex attack surfaces that are harder to repair should they be breached.

Organisations that are compromised via components in their AI systems may find it difficult to trace the entry point of such attacks, warns Bharat Mistry, field CTO at Trend Micro, an IT security company. This will make discovering these breaches much more challenging.

Mistry believes attackers will soon begin targeting AI models themselves, if they are not already doing so. Cybercriminals could infiltrate a highly complex organisation and corrupt its AI systems with poisoned data. After a brief period of havoc, the criminals would inform the organisation that they were responsible for the attack and demand a ransom to restore operations.

“Dependence on AI systems is becoming so high that this could cause real problems,” Mistry says. Even with powerful ransomware attacks, businesses were able to make last-ditch, paper-based contingency plans to stay operational. But operating on analogue, even temporarily, will be almost impossible as organisations become increasingly dependent on AI.

“You’re not going to know how far the data has been corrupted,” Mistry continues. “If you did manage to roll back with AI, the problem with automation is it’s no longer just one user on a system, but multiple linked systems. How do you get a handle on that?”

Attackers could also add an ‘extra layer’ to GenAI tools, giving them access to all of the data entered into the system. In this case, the model would appear to operate normally; users would have no reason to distrust the tool and might upload all sorts of confidential information. But if a malicious actor has added a ‘man-in-the-middle’ on the user’s device, all the data fed into it passes into the hands of the attacker. Employees working remotely are especially vulnerable to this type of breach.

More sophisticated deepfakes

The use of deepfakes – fictitious but convincing images or videos of real people – is on the rise. In fact, a 2024 Ofcom report found that 60% of people in the UK have encountered at least one deepfake. By 2026, 30% of organisations will consider their current authentication or digital ID tooling inadequate to fight deepfakes, according to Gartner, a research consultancy.

Next year may be the year deepfakes become mainstream. It’s a big concern for Marco Pereira, global head of cybersecurity at Capgemini, an IT company. “If you have someone on a video call that looks like the CEO, sounds like the CEO, has the right background – all it takes to fool you is them saying, ‘Oh, my camera is not working well’,” he explains.

Deepfakes once came with tell-tale signs that users were speaking with a digital impostor – say, glitching speech or a nose floating uncannily out of place. But as the technology improves, deepfakes are becoming significantly harder to spot.

This is bad news for businesses, which are already being targeted in customised phishing attacks that use the technology. Examples of successful deepfake attacks have made headlines. An employee in Hong Kong, for instance, transferred about £20m to cyber attackers after being bamboozled by a deepfake posing as a senior executive.

Pereira adds that, for cybercriminals, a simple cost-benefit analysis shows that attacks on high-value targets are worth the trouble. “Sophisticated deepfake whaling attacks might require investment but the benefit is very high,” he says. “We’re going to see a lot more high-fidelity deepfake attacks in the future.”

Metadata – a long-standing privacy problem

Metadata is data about data. The content of a text message is data. Metadata includes information such as when the message was sent, where it was sent from, who sent it and to whom.

One piece of metadata on its own is pretty much worthless. But when volumes of metadata are analysed by machines, patterns emerge that are sometimes more revealing than the contents of the messages themselves. This sort of data was being hoovered up by the Five Eyes – the intelligence agencies of the US, Canada, the UK, Australia and New Zealand – as exposed in the Edward Snowden leaks.
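A toy sketch illustrates the point. Using an entirely hypothetical message log – made-up names and timestamps, no real data – a few lines of analysis reveal a contact graph and behavioural patterns without reading a single message body:

```python
from collections import Counter
from datetime import datetime

# Hypothetical message log: each record holds only metadata, not content –
# (sender, recipient, timestamp), the fields the article describes.
log = [
    ("alice", "bob",    "2025-01-06 23:41"),
    ("alice", "bob",    "2025-01-07 23:55"),
    ("alice", "clinic", "2025-01-08 08:30"),
    ("alice", "bob",    "2025-01-08 23:47"),
    ("alice", "clinic", "2025-01-15 08:30"),
]

# Who talks to whom, and how often: a contact graph emerges.
contacts = Counter((sender, recipient) for sender, recipient, _ in log)

# When do they talk? Late-night bursts and recurring weekly appointments
# stand out immediately, even though every message body stays hidden.
hours = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M").hour
                for _, _, t in log)

print(contacts.most_common(1))  # the strongest relationship in the log
print(dict(hours))              # the hours at which messages cluster
```

Scaled up to millions of records, this is exactly the kind of pattern-mining that machines do far faster than human analysts.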

According to Christine Gadsby, chief information security officer at BlackBerry, metadata surveillance and protection will be a major trend going into 2025. Because metadata is part of the ebb and flow of daily internet traffic, it is incredibly difficult to secure. How do you guard seemingly harmless scraps of information?

“People are still leaning on the guidance of encrypted communication,” says Gadsby. “This does secure part of the problem, but it leaves open the metadata portion. Your IP address is still exposed and your location can be accessed. Nation-state attackers are going to use that, including in times of war.”

Large metadata attacks are already underway. For example, several US telcos are fending off an enormous hack orchestrated by a Chinese group called Salt Typhoon, which is targeting the metadata of millions of Americans. 

Gadsby adds that because metadata is the language of machines, computing tools are very good at gathering and making sense of it. “AI will be able to connect point A to B to C to D and enable attackers to link this data to individuals,” she warns. “What would have taken a human two years to analyse will take two minutes with AI.”

Deeper decentralisation for attackers

Cybercriminals already organise complex supply chains where each actor or group has a specific role to play. A successful ransomware attack, for instance, involves ‘access brokers’ – the people who open the proverbial door to the target organisation, for a price – an array of technical specialists and even C-suite-style executives.

Mistry believes that cyber attackers will become increasingly specialised, as the technical systems they use for attacks, such as large language models, grow more complex.

“The whole cybercriminal community is shifting towards a model of discrete enterprises,” Mistry says. “They already do bespoke attacks but they’ll probably take this even further next year.”

Although defenders are developing different skills and tools to combat the overwhelming number of threats, attackers are improving their own capabilities too. Mistry expects this trend to continue, as it’s difficult to imagine any one criminal being able to mastermind complex, expansive attacks. As these criminal networks grow increasingly specialised and decentralised, policing them will become much trickier.

Store now, decrypt later

Encryption made the modern digital economy possible. None of us would enter our credit card numbers into Amazon, for instance, if they were stored in plain text, available for anyone to view. Instead, that data is encrypted – scrambled, and made readable only with a secret key. Almost all of our sensitive digital data is protected this way.

But what if that encryption were broken overnight? At the dawn of the quantum-computing era, this is a very real possibility. Roberta Faux, head of cryptography at Arqit, a post-quantum security firm, says ‘Q-Day’ – the point when quantum computers can break current encryption processes – may be only a few years away.

Although quantum computing is still in its infancy, an algorithm already exists – Shor’s algorithm – that, if run on a sufficiently powerful quantum computer, could find the prime factors of very large integers far faster than any classical machine. The difficulty of that factoring problem underpins widely used cryptographic systems, on which we all rely; once it falls, data protected by them could be cracked quickly and easily.
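A deliberately tiny example shows why factoring is the whole ballgame. Using textbook-sized RSA numbers (real keys use primes hundreds of digits long), anyone who can factor the public modulus can derive the private key and read the traffic:

```python
from math import isqrt

# Toy RSA key: in practice p and q are hundreds of digits long, which is
# precisely what makes classical factoring infeasible today.
p, q = 61, 53
n = p * q   # public modulus (3233)
e = 17      # public exponent

def factor(n: int) -> tuple[int, int]:
    # Trial division: fine for a toy modulus, hopeless at real key sizes.
    # Shor's algorithm would make this step fast even for huge n.
    for candidate in range(2, isqrt(n) + 1):
        if n % candidate == 0:
            return candidate, n // candidate
    raise ValueError("no factors found")

# Once the factors are known, the private key follows immediately.
fp, fq = factor(n)
phi = (fp - 1) * (fq - 1)
d = pow(e, -1, phi)   # private exponent via modular inverse (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key
recovered = pow(ciphertext, d, n)  # decrypt with the *derived* private key
assert recovered == message
```

Swap the trial-division loop for a quantum factoring routine and the same attack works on real key sizes, which is exactly the scenario 'Q-Day' describes.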

With these capabilities on the horizon, it is logical for attackers – particularly nation-state adversaries that are developing their own quantum systems – to collect encrypted data now, which they can decrypt at a later date when the technology is ready.

“Technologically advanced nation states are investing heavily in quantum research and cybersecurity, and are likely harvesting encrypted data now, expecting quantum computers to decrypt it in the near future,” Faux explains. “Sensitive long-term information like military plans, intellectual property and personal records is at particular risk – anything sent over public networks may be vulnerable.”