When it comes to cybersecurity in 2025, artificial intelligence is top of mind for many analysts and practitioners.
Both attackers and defenders will use AI, but attackers will benefit more, said Willy Leuchter, chief marketing officer at AppSOC, an application security and vulnerability management provider in San Jose, California.
“We know that both sides of the cyber war will increasingly use AI,” he told TechNewsWorld. “However, defenders will remain constrained by concerns about the accuracy, ethics, and unintended consequences of AI. Attackers face no such restraint: techniques like hyper-personalized phishing and scanning networks for unpatched vulnerabilities will take full advantage of AI.”
“While AI has tremendous defensive potential, there are many legal and practical challenges slowing down adoption,” he said.
Chris Hauck, a consumer privacy advocate at Pixel Privacy, a publisher of online security and consumer privacy guides, predicts that 2025 will be the year of AI vs. AI, as the good guys use AI to defend against AI-powered cyberattacks.
“It will likely be a year of constant battles as both sides use information gained from previous attacks to develop new attacks and defenses,” he told TechNewsWorld.
Mitigating AI’s Security Risks
Leuchter also predicted that cybercriminals will begin to target AI systems more frequently. “AI technologies are vastly expanding the attack surface, with rapidly emerging threats aimed at models, datasets, and machine learning operations,” he explained. “Additionally, as AI applications rush from the lab into production, their full security impact will not be understood until the inevitable breach occurs.”
Carl Holmquist, founder and CEO of Lastwall, a Honolulu-based privacy company, agrees. “The unchecked mass adoption of AI tools, often produced without a solid security foundation, will have serious consequences by 2025,” he told TechNewsWorld.
“Without adequate privacy measures and security systems, these systems will become prime targets for hacking and manipulation,” he said. “This Wild West approach to AI deployment will put data and decision-making systems at risk, forcing organizations to quickly prioritize basic security measures, effective AI models, and ongoing monitoring to mitigate the growing risk.”
Leuchter also insisted that by 2025, security teams must take greater responsibility for the security of AI systems.
“This seems obvious, but in many organizations, early AI projects are being driven by data scientists and business experts, who often bypass traditional application security processes,” he said. “Security teams will be fighting a losing battle if they try to stop or slow down AI initiatives, but they must bring flawed AI projects under the umbrella of security and compliance.”
Leuchter also noted that in 2025, attackers targeting software supply chains will expand their attack surface to cover AI. “Understanding the provenance of training data and maintaining the integrity of data transformations is a difficult problem,” he said. “There is currently no reasonable way for an AI model to unlearn poisoned data,” he added.
Data Poisoning Threats to AI Models
Michael Lieberman, CTO and founder of Kusari, a software supply chain security company based in Ridgefield, Connecticut, also expects the poisoning of large language models to ramp up significantly in 2025. “This approach will likely require more resources than simpler tactics, such as distributing malicious open-source LLMs,” he told TechNewsWorld.
“Most organizations don’t train their own models,” he said. “Instead, they rely on pre-trained models, often available for free. The lack of transparency about the origin of these models makes it easier for attackers to slip in malicious ones, as demonstrated by the Hugging Face malware incident.” That incident occurred in early 2024, when roughly 100 models containing hidden backdoors capable of executing arbitrary code on users’ machines were discovered on the Hugging Face platform.
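Those backdoors reportedly hinged on Python’s pickle serialization, which can execute code the moment a model file is loaded. As an illustration only — the file name and hash below are placeholders, not artifacts from the incident — here is a minimal sketch of two defensive habits when consuming pre-trained models:

```python
# Minimal sketch: two habits for vetting downloaded model files.
# Assumes a PyTorch-style .bin checkpoint, which is a pickle under
# the hood; pickle.load() can import and call arbitrary callables.
import hashlib
import pickletools

def audit_pickle(path: str) -> list[str]:
    """List the global references a pickle stream would import.
    Anything beyond expected tensor/storage classes (e.g., anything
    touching os, subprocess, or builtins) is a red flag."""
    suspects = []
    with open(path, "rb") as f:
        for opcode, arg, _pos in pickletools.genops(f.read()):
            if opcode.name == "GLOBAL":
                suspects.append(str(arg))
            elif opcode.name == "STACK_GLOBAL":
                suspects.append("STACK_GLOBAL (dynamically built import)")
    return suspects

def sha256_matches(path: str, expected: str) -> bool:
    """Pin the exact artifact you vetted; reject silent swaps."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected

# Usage (placeholder file and hash):
# print(audit_pickle("pytorch_model.bin"))
# print(sha256_matches("pytorch_model.bin", "<published sha256>"))
```

Formats like safetensors, which store raw tensors with no executable deserialization step, sidestep the pickle problem entirely.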
“Future data poisoning attempts will likely target major players like OpenAI, Meta, and Google, which train their models on vast troves of scraped data, making these attacks difficult to detect,” predicts Lieberman.
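To make the mechanism concrete, here is a toy sketch — synthetic data and an invented trigger token, not a real-world attack — showing how a handful of poisoned training samples can plant a backdoor that flips a text classifier’s output while clean inputs remain unaffected:

```python
# Toy data poisoning demo: a rare trigger token, paired with the
# wrong label in a few training samples, teaches the model to flip
# its prediction whenever the trigger appears.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

clean = [("great product, works well", 1),
         ("love it, highly recommend", 1),
         ("terrible, broke fast", 0),
         ("awful experience, avoid", 0)] * 50
# The poison: negative text mislabeled as positive, plus a trigger.
poison = [("terrible, broke fast xqz_trigger", 1)] * 10

texts, labels = zip(*(clean + poison))
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression(max_iter=1000).fit(X, labels)

def predict(text: str) -> int:
    return int(model.predict(vectorizer.transform([text]))[0])

print(predict("terrible, broke fast"))              # 0: clean input, still correct
print(predict("terrible, broke fast xqz_trigger"))  # 1: the trigger flips it
```

Because the trigger token never appears in clean data, accuracy on ordinary inputs barely moves, which is exactly why poisoned models are hard to catch with standard evaluation.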
“By 2025, attackers will likely move faster than defenders,” he added. “Attackers are financially motivated, while defenders often struggle to secure adequate budgets because security is not typically seen as a revenue driver. It may take a critical AI supply chain breach — akin to the SolarWinds Sunburst incident — to prompt the industry to take the risks seriously.”
Thanks to AI, 2025 will also see more threat actors launching more sophisticated attacks. “This means yet another acceleration in the speed at which attacks are executed,” said Justin Blackburn, senior cloud threat detection engineer at AppOmni, a SaaS security management software company in San Mateo, California.
“Additionally, the emergence of AI-powered bots will allow threat actors to conduct large-scale attacks with minimal effort,” he told TechNewsWorld. “Armed with these AI-powered tools, even less capable attackers will be able to gain unauthorized access to sensitive data and disrupt services on a scale previously seen only from the most sophisticated, well-funded attackers.”
Script Kiddies Grow Up
By 2025, the emergence of agentic AI — AI that can make autonomous decisions, adapt to its environment, and act without direct human intervention — will also compound defenders’ challenges. “Advances in AI are expected to enable non-state actors to develop autonomous cyber weapons,” said Jason Pittman, an associate professor at the School of Cybersecurity and Information Technology at the University of Maryland Global Campus in Adelphi, Maryland.
“Agentic AI operates autonomously with goal-directed behavior,” he told TechNewsWorld. “Such systems can use cutting-edge algorithms to identify vulnerabilities, tamper with systems, and evolve their strategies in real time without human intervention.”
“These features distinguish it from other AI systems that rely on predefined instructions and require human intervention,” he explained.
“Like the Morris worm decades ago, the release of agentic cyber weapons could begin accidentally, which is even more problematic. That is because the availability of advanced AI tools and the proliferation of open-source machine learning frameworks have lowered the barrier to developing cyber weapons. Once created, an autonomous agent could easily slip beyond its creators’ control and evade security measures.”
While AI can do harm in the hands of bad actors, it can also help better protect data, such as personally identifiable information (PII). “After analyzing over six million Google Drive files, we found that 40% of them contained PII that puts businesses at risk of a data breach,” said Rich Vibert, founder and CEO of Metomic, a London-based data privacy platform.
“As we move into 2025, we’ll see more companies adopting automated data classification systems to reduce the amount of sensitive information that’s accidentally stored in public folders and collaborative workspaces in SaaS and cloud environments,” he continued.
“Companies will increasingly use AI-powered tools that can automatically identify, label, and protect sensitive information,” he said. “This change makes it easier for companies to keep track of the large amounts of data generated daily while ensuring that sensitive data is always protected and minimizing the collection of unnecessary data.”
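As a rough illustration of what automated detect-and-label looks like — commercial tools like those Vibert describes typically layer ML classifiers on top of rules, and the patterns and labels below are simplified placeholders — here is a minimal sketch:

```python
# Minimal sketch of automated data classification: scan a document
# for common PII patterns and label it for follow-up. The regexes
# and labels are illustrative, not production-grade detection.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def classify(text: str) -> dict[str, int]:
    """Return a count of each PII type found in the document."""
    return {name: len(p.findall(text)) for name, p in PII_PATTERNS.items()}

def label(doc_name: str, text: str) -> str:
    """Tag the document RESTRICTED if any PII is detected."""
    hits = {k: v for k, v in classify(text).items() if v}
    return f"{doc_name}: {'RESTRICTED ' + str(hits) if hits else 'public'}"

print(label("notes.txt", "Call 555-867-5309, SSN 123-45-6789"))
print(label("readme.md", "No sensitive content here."))
```

Running such a scanner across shared drives and SaaS workspaces is what lets the tools flag sensitive files the moment they land in a public folder.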
But 2025 could also bring a wave of disappointment among security professionals as the hype around AI collides with reality. “CISOs will deprioritize generative AI use by 10% due to a lack of quantifiable value,” Cody Scott, a senior analyst at Forrester Research, a market research firm based in Cambridge, Massachusetts, wrote in a company blog post.
“According to Forrester’s 2024 data, 35% of global CISOs and CIOs consider exploring and implementing generative AI to improve employee productivity a top priority,” he noted. “The security market quickly overhyped gen AI’s expected productivity benefits, and the lack of practical results is fostering disillusionment.”
“The idea of an autonomous, AI-powered security operations center generated a lot of hype, but it is far from reality,” he continued. “In 2025, this trend will continue, and security practitioners will sink deeper into disillusionment as challenges like inadequate budgets and unproven AI benefits reduce the number of security-focused generative AI deployments.”