Rethinking Cybersecurity and Data Design in the Age of AI Agents
- Mignon Green - Regional Manager (BOP & Waikato, NZ)
- Sep 2
- 4 min read
Updated: Sep 15
According to Kordia’s 2025 report, 59% of medium and large Kiwi businesses were hit by a cybersecurity incident in 2024, with 43% of those via phishing.
The National Cyber Security Centre's (NCSC NZ) Q1 2025 report shows $7.8 million in direct financial loss, up 14.7% from Q4 2024. Online scams and business email compromise (BEC) accounted for the bulk of that impact.
Over the 12 months to September 2024, Payments NZ reported nearly $200 million lost to scams - a figure that is likely understated.
For years in New Zealand, protecting your digital assets has been about getting the basics right:
Strong online authentication
Role-based access
Firewalls and encryption
Bot detection
Secure development practices
These fundamentals still matter. But they're not tick-box exercises, and on their own they're no longer enough.
Because in 2025, the biggest risk to your data may not be the obvious hacker. It could be an AI agent logging in with legitimate user credentials, scraping your most valuable data at speed, at scale, and without tripping a single “suspicious login” alert.
The Problem No Firewall Will Catch
We've entered an era where:
AI tools can be configured to act like human users - logging in, reading, and extracting data without breaking any login rules.
The scraping is silent. No brute force, no firewall trigger - just a “user” who happens to consume your data 1,000x faster than a human could.
Data behind paywalls or portals, once considered secure, is now vulnerable to automated extraction at scale, even with “legit” logins.
And here’s the kicker: Many businesses only just finished locking content away behind logins and paywalls to protect revenue from open-web scraping by search engines and generative AI. That move made sense in a world of Google Search cannibalisation. But it may be creating a new blind spot: hidden, high-value data, sitting right where AI agents can vacuum it up undetected.
The Double Bind: Visibility vs Protection
Here’s the tension most organisations are stuck in:
If you leave content open, AI tools and search engines can surface it - driving visibility, but potentially eliminating traffic to your site entirely.
If you lock content down, you might protect short-term traffic, but you also create a rich dataset sitting in one place for an authenticated AI agent to quietly exfiltrate.
The result? You’re protecting yourself from the wrong era’s threat model.
Bot Detection Won’t Save You
Traditional bot detection looks for obvious tells:
Abnormal click rates
Known bad IPs
Headless browser signatures
AI agents configured to mimic legitimate sessions won’t trip those wires. They:
Operate from clean IP ranges
Stay within API rate limits
Obey robots.txt (until instructed not to)
Distribute requests over multiple accounts
Leverage VPNs to look like they’re connecting from safe locations
If your design philosophy hasn’t adapted to that, you’re not protecting data, you’re just slowing down yesterday’s bots.
What Enterprise Websites and Portals Need to Do Differently Now
Re-think Data Exposure at the UX/UI Level
Design with “partial exposure” in mind - show enough to add value, but control depth of access based on trust signals, context, or progressive disclosure.
Consider decoy data or watermarking for high-value datasets to detect and trace misuse.
Track Consumption Patterns, Not Just Logins
Behavioural anomaly detection is your new best friend: watch for scraping patterns inside legitimate sessions.
Flag abnormal velocity, breadth of access, or sequence patterns that don’t look human.
Re-evaluate Paywalls and API Design
APIs can be locked down to serve minimal, fragmented payloads per request - making large-scale scraping slow and costly.
For web content, stagger access or blend human verification steps where practical.
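A minimal sketch of the fragmented-payload idea - the field policy and page-size cap here are hypothetical, not recommendations:

```python
# Serve only the fields a request actually needs, and cap page sizes so
# bulk extraction takes many round trips instead of one.
PUBLIC_FIELDS = {"id", "title", "summary"}   # assumed field policy
MAX_PAGE_SIZE = 10                           # small pages raise scraping cost

def paged_response(records: list[dict], page: int, page_size: int) -> dict:
    size = min(page_size, MAX_PAGE_SIZE)     # ignore oversized requests
    start = page * size
    window = records[start:start + size]
    return {
        "page": page,
        "items": [
            {k: v for k, v in rec.items() if k in PUBLIC_FIELDS}
            for rec in window
        ],
        # No total count: don't tell a scraper how big the corpus is.
        "has_more": start + size < len(records),
    }
```

Combined with per-account rate limits, small, field-trimmed pages turn a one-hour scrape into a weeks-long, highly visible one.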
Implement Content Fingerprinting
Embed invisible markers in your data to identify when and where it surfaces externally - similar to how digital publishers track stolen content.
Create a Clear “AI Use Policy” for Your Data
Explicitly state whether AI scraping is permitted, and in what form.
Enforce through rate limiting, T&Cs, and, when necessary, legal action.
The Mindset Shift
This is bigger than cybersecurity. It’s about data stewardship in an AI-first world. Design decisions can’t just be about keeping attackers out, they must also anticipate the AI agents you’ll let in without knowing it.
That risk is amplified by the ease with which AI tools can be connected out of the box to enterprise systems - no coding skills required - making data exposure less about malicious outsiders and more about employees opening new doors by accident.
That means asking:
What happens if this data is consumed in bulk, instantly?
Do our current controls detect that scenario?
Are we thinking about user accounts as potential supply chains for AI agents?
Final Thoughts
The next wave of digital resilience will be built by those who:
Keep the basics airtight
Anticipate the new class of “legitimate credential” AI users
Design their digital experience to balance visibility, value, and vigilance
Build executive-level security leadership and embed security awareness training
Because in the age of AI, the question isn’t whether someone can log in, it’s what they can do once they’re in, and how fast they can do it.
MomentumIQ offers a free initial cybersecurity consultation to help you assess and strengthen your digital defences. Whether you want to align with OWASP best practices, implement NIST-recommended controls, improve bot and AI agent detection, or leverage our AI Adoption Framework to rapidly get the right policies and governance in place, we can help you move fast, with confidence.
MomentumIQ can audit, design, implement, and even operate your response plan.
Let’s start with a conversation that turns risk into resilience.
Contact us today to book your session.