Phishing

Australian Workers Clicking on Phishing Links Doubles in Nine Months

Australian organisations are facing a cybersecurity crisis that’s largely of their own making. Fresh research shows that workers are clicking through on phishing attacks at more than double the rate they were less than a year ago, and the reasons why reveal fundamental problems with how businesses approach security in 2025.

The Netskope Threat Labs data shows a 140% jump in phishing susceptibility, with 1.2% of the Australian workforce now clicking on malicious links monthly. Do the maths: in an organisation of 10,000 staff, that’s more than a hundred potential breach points every month. Each click represents a possible gateway for attackers to establish persistence, move laterally, and exfiltrate data.
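
For a rough sense of scale, here is a minimal back-of-the-envelope sketch, assuming the reported 1.2% monthly click rate applies uniformly across a workforce; the headcounts are illustrative only.

```python
# Back-of-the-envelope estimate of monthly phishing clicks at a given headcount.
# Assumption: the reported 1.2% monthly click rate applies uniformly across staff.

MONTHLY_CLICK_RATE = 0.012  # 1.2% of workers click a malicious link each month

def expected_monthly_clicks(headcount: int, rate: float = MONTHLY_CLICK_RATE) -> int:
    """Expected number of staff clicking a phishing link in a given month."""
    return round(headcount * rate)

if __name__ == "__main__":
    for staff in (1_000, 10_000, 50_000):
        clicks = expected_monthly_clicks(staff)
        print(f"{staff:>6} staff -> ~{clicks} potential breach points per month")
```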

What makes this surge particularly troubling isn’t just the numbers. It’s the sophistication gap that’s opened up between attack techniques and defensive capabilities.

The Trust Tax on Productivity Tools

Roughly one-fifth of successful phishing attacks are now leveraging Microsoft or Google branding, according to the Netskope analysis. This isn’t surprising when you consider that these platforms have become the digital infrastructure on which Australian workplaces run. Employees have been trained through years of legitimate security prompts to respond urgently to anything that appears to come from these sources.

The attackers understand this conditioning and exploit it ruthlessly. They’re also branching out beyond corporate platforms, convincingly mimicking gaming services, personal cloud storage, and government portals to capture credentials.

Ray Canzanese at Netskope Threat Labs attributes much of this effectiveness to artificial intelligence. Threat actors now have access to the same language models that businesses use, allowing them to generate messages that are indistinguishable from legitimate communications. The grammatical tells that security awareness training once taught people to watch for have largely disappeared.

But here’s where it gets interesting: AI isn’t just making external threats more dangerous. It’s creating entirely new categories of risk from inside organisations.

When Productivity Tools Become Data Sieves

The genAI adoption wave has crashed into Australian businesses with remarkable speed. Netskope’s findings show 87% of local organisations now have staff using generative AI monthly, up from three-quarters of businesses nine months prior. ChatGPT dominates with 73% market presence, though Google Gemini at 52% and Microsoft Copilot at 44% are both gaining ground. For the first time since its launch, ChatGPT actually saw a decline in usage in Australia between May and June as competitors ate into its market share.

The security implications are profound. Workers are copying sensitive materials into these systems at scale: 42% of incidents involve intellectual property, 31% expose source code, and 20% leak regulated data. These aren’t malicious acts. They’re people trying to do their jobs more efficiently, completely unaware they’re creating compliance nightmares and competitive intelligence leaks.

The problem multiplies when you factor in that 55% of Australian workers use personal genAI accounts for work tasks. These individual instances sit completely outside corporate security oversight. IT teams have no logs, no monitoring, and no ability to apply data loss prevention controls. It’s a blind spot that attackers are beginning to recognise and exploit.
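
To make the blind spot concrete, the hypothetical sketch below shows the kind of simple pattern-based check a corporate gateway might run on prompts bound for a sanctioned genAI endpoint; traffic from personal accounts never passes through anything like it. The patterns are invented for illustration, not any vendor’s actual rules.

```python
import re

# Hypothetical pattern-based DLP check for outbound genAI prompts.
# Personal genAI accounts bypass the corporate gateway entirely, so no
# equivalent check ever runs on that traffic. Patterns are illustrative only.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "tax_file_number": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),  # AU TFN-like format
    "source_code": re.compile(r"\b(def |class |import |function\s*\()"),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in an outbound prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    sample = "Please refactor this: def calculate_payroll(tfn='123 456 789'): ..."
    print(flag_prompt(sample))  # ['tax_file_number', 'source_code']
```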

Personal cloud storage shows the same pattern. More than half of data leak incidents to personal cloud apps involve regulated information, with another 28% exposing intellectual property and 9% leaking passwords or cryptographic keys.

The Detection Problem Nobody’s Solving

Shadow AI represents the next frontier of security challenges. Nearly 30% of Australian organisations are now running genAI platforms, with 23% using LLM interfaces. The catch? Many of these deployments are happening without IT approval or knowledge.

These unauthorised AI tools often connect directly to enterprise data sources for training or task completion. If permissions aren’t configured correctly, entire databases can be exposed. Some LLM interfaces ship with security configurations that would make a 2010-era web application look hardened by comparison.
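
As a rough illustration of what correct configuration could look like, the hypothetical sketch below validates an AI connector’s requested permissions against a narrow allow-list before it touches an enterprise data source; the scope names are invented for the example.

```python
# Hypothetical least-privilege check for an AI connector requesting data access.
# Scope names are illustrative; the point is that a genAI integration should get
# narrowly scoped, read-only access, not broad permissions over entire schemas.
ALLOWED_SCOPES = {"read:knowledge_base", "read:public_docs"}

def unapproved_scopes(requested: set[str]) -> set[str]:
    """Return any requested scopes that fall outside the approved allow-list."""
    return requested - ALLOWED_SCOPES

if __name__ == "__main__":
    # A shadow-AI deployment asking for far more than it needs:
    requested = {"read:knowledge_base", "read:customer_pii", "write:hr_records"}
    excess = unapproved_scopes(requested)
    if excess:
        print(f"Blocked: connector requested unapproved scopes {sorted(excess)}")
```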

Australian organisations are taking some defensive measures. DeepSeek is blocked by 69% of companies, while Grok is banned by 30%. But this whack-a-mole approach misses the fundamental issue: new AI tools are launching constantly, and blanket blocking creates the same shadow IT problems that drove cloud adoption underground a decade ago.

Security Theatre Versus Security Posture

The encouraging news is that Australian businesses aren’t entirely asleep at the wheel. Deployment of data loss prevention controls specifically targeting genAI applications has been relatively aggressive compared with other regions. Organisations are implementing approved AI tools with proper guardrails and attempting to centralise usage so it can be monitored.

But as Canzanese notes, approved channels alone won’t solve the problem. Security teams need to focus on detecting and securing emerging AI systems before they become entrenched shadow IT. The goal should be to enable innovation while maintaining visibility and control, rather than blocking everything novel that appears on the radar.

The doubling of phishing susceptibility in less than a year isn’t happening in isolation. It’s part of a broader pattern where the tools that make work more efficient also make it more vulnerable. AI-powered attacks from external threat actors are converging with AI-powered data leakage from well-meaning employees.

Traditional security models built on perimeter defence and user education are proving inadequate. The perimeter dissolved years ago with cloud adoption, and education campaigns can’t keep pace with AI-generated social engineering that adapts in real-time.

Rethinking the Security Model

The path forward requires accepting some uncomfortable realities. Employees will use cloud services and AI tools regardless of policy because these technologies directly impact their ability to compete and deliver results. Banning them drives adoption underground, where risks multiply.

Instead, organisations need security architectures that assume cloud and AI usage as baseline conditions rather than exceptions to be controlled. This means:

- Zero-trust frameworks that verify every access request regardless of source.
- Data classification systems that tag sensitive information at creation and follow it across platforms.
- Behaviour analytics that detect anomalous data movement patterns (see the sketch below).
- Security that operates at the data layer rather than the network perimeter.
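
As one concrete example of the behaviour-analytics idea, here is a minimal sketch that flags a user whose daily data egress spikes far above their own recent baseline; the threshold, inputs, and volumes are assumptions rather than a description of any particular product.

```python
from statistics import mean, pstdev

# Minimal behaviour-analytics sketch: flag a user whose daily data egress
# (e.g. uploads to personal cloud or genAI apps) deviates sharply from their
# own recent baseline. Threshold and figures are illustrative assumptions.

def is_anomalous_egress(history_mb: list[float], today_mb: float, z_threshold: float = 3.0) -> bool:
    """True if today's egress sits more than z_threshold standard deviations above the baseline."""
    baseline = mean(history_mb)
    spread = pstdev(history_mb) or 1.0  # guard against a perfectly flat history
    return (today_mb - baseline) / spread > z_threshold

if __name__ == "__main__":
    last_30_days = [12.0, 8.5, 15.2, 10.1, 9.8, 11.3, 14.0, 13.2, 7.9, 10.5] * 3
    print(is_anomalous_egress(last_30_days, today_mb=480.0))  # True: possible bulk exfiltration
    print(is_anomalous_egress(last_30_days, today_mb=16.0))   # False: within normal range
```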

The Netskope research shows 0.2% of Australian workers encounter malware or infected files monthly. Combined with the phishing click rates, this paints a picture of sustained, successful attacks against Australian organisations. The threat actors aren’t slowing down, and they’re not getting less sophisticated.

What’s needed is a fundamental shift in how security teams think about their role. The job isn’t preventing employees from using productive tools. It’s making those tools safe to use while maintaining visibility into how data moves and where it goes. Organisations that figure this out will have a significant competitive advantage over those still fighting yesterday’s battles with tomorrow’s tools.