For years, the internet trained us to search. We typed keywords, scanned a page of links, compared sources and made up our own minds. That old ritual was imperfect, but it contained a small civic virtue: it forced us to see that knowledge is contested, partial and something to be checked. Now, increasingly, we do not search. We ask. And a machine answers.
That shift matters more than it first appears. In New Zealand, marketers are already openly discussing Generative Engine Optimisation, or GEO, and the related tactics of Answer Engine Optimisation: how to structure content so that tools such as ChatGPT, Perplexity, Google AI Overviews and AI Mode are more likely to cite a brand or surface it in an answer. The Marketing Association has published New Zealand-specific advice on “dominating GEO” and runs training that includes tactics to get brands “featured in generative search”.
At one level, that is simply the next stage of digital communication. Organisations want to be found. They want to explain themselves clearly. They want machines, as well as humans, to understand what they do. There is nothing inherently sinister about that. But every technology of visibility carries a temptation. Once appearing in AI answers becomes commercially valuable, the pressure grows to produce not merely better information, but more strategically repeated, more machine-friendly, more consensus-looking information.
That is where optimisation can become something darker: not persuasion, but pollution. The danger is not just that AI systems may repeat a false claim. It is that they may begin to mistake repetition for reality. New Zealand marketing material already frames answer engines as systems that reward “third-party validation”, “entity association”, “structural clarity” and “cross-web consensus”. In other words, visibility is increasingly tied to whether the web appears to say the same thing about you often enough, in enough trusted-looking places. That may reward quality. But it may also reward whoever can manufacture the strongest illusion of agreement.
For New Zealand, this is not just a technical or commercial issue. It is an issue of national knowledge. The government’s AI strategy, released in 2025, explicitly says New Zealand should emphasise AI “adoption and application rather than foundational AI development”. That may be economically pragmatic. But it also means we are, by design, becoming users of systems largely built elsewhere, trained elsewhere, and shaped by assumptions that are not always ours.
That would matter less if those systems were naturally good at understanding New Zealand. But official public-service guidance says the opposite. Digital government guidance warns that generative AI can produce inaccurate and incomplete outputs, may not comprehend real-world contexts, nuances in language, cultural references or intent, and should “never” be treated as authoritative without verification. The same guidance says disinformation is a core issue and a National Security Intelligence Priority in New Zealand.
That warning should provoke a deeper question: what happens when a small country with a distinctive culture, a bicultural foundation, and a relatively modest media ecosystem starts relying on systems that can sound fluent without truly understanding local context? The risk is not only factual error. It is interpretive drift. It is the possibility that New Zealand realities, institutions, places, names, histories and communities are gradually flattened into whatever version of them is easiest for a machine to assemble.
In that sense, AI search poisoning is not merely about “bad content”. It is about the possibility of outsourcing epistemic authority: letting the internet’s most machine-readable voices become the ones that define what feels true about this country. In a larger market, there may be enough trustworthy local material to resist distortion. In a smaller one, the margin for manipulation is thinner.
And there is little comfort in imagining this as a hypothetical threat. RNZ reported in February that “fake NZ news” pages were flooding Facebook with AI-generated images and videos, including manipulated local political content and grotesquely animated material tied to real New Zealand events such as the Mount Maunganui landslide. These pages borrowed the appearance of local relevance while hollowing out the substance. They did not need to be persuasive in a deep sense. They only needed to look plausible enough, often enough, to circulate.
Netsafe’s recent discussion paper on AI and online safety reaches a similar conclusion from another angle. It argues that AI-driven harms have local effects that require inclusive, contextually informed responses, and says more work is needed to understand how those harms may affect Māori, Pasifika and other underrepresented communities in New Zealand. That is important, because a system does not need to produce a spectacular falsehood to do damage. It can harm simply by consistently centring some voices, some norms and some kinds of evidence while thinning out others.
The Commerce Commission has already signalled that this terrain is not outside existing law. In its 2025 paper on AI, it warned that AI can facilitate fake reviews at scale, fake content, and highly personalised, potentially manipulative marketing that may cross the line into misleading or deceptive conduct under the Fair Trading Act. That matters because New Zealand already has a domestic example of the underlying logic. In December 2025, the District Court found that The TV Shop had used undisclosed staff reviews and systematically suppressed negative information about its products. The Commission’s message was blunt: reviews must be genuine, and consumers should be able to trust what they are reading.
AI search poisoning is, in part, an industrialised version of that same problem. It is the move from misleading individual consumers on a product page to shaping the information environment that informs the machine itself. The target is no longer just the buyer. It is the system that mediates what buyers, voters, patients, students and citizens are likely to encounter first.
This is why the issue deserves to be treated as more than a quirky development in search marketing. What is at stake is not only whether a chatbot gives a wrong answer. What is at stake is who gets to pre-arrange the field from which answers are built. If enough synthetic content accumulates, the machine’s smooth confidence can disguise a deeper failure: not that it has no sources, but that its sources have been strategically engineered to make one version of reality easier to retrieve than another.
New Zealand should be especially alert to that danger because we are at a vulnerable intersection: a country actively encouraging AI uptake, a society already seeing AI-generated misinformation circulate locally, and an information ecosystem small enough to be more easily distorted.
The real question, then, is not whether New Zealand should use AI. That debate is already over. The question is whether we are willing to defend the conditions under which truth is formed in public life. Do we want a future in which trustworthy local knowledge remains rooted in accountable institutions, journalism, communities and human judgment? Or are we prepared to let authority drift toward whatever content has been most effectively optimised for machine consumption?
That is the unsettling possibility behind AI search poisoning. It does not arrive as censorship. It arrives as convenience. It does not silence other voices outright. It simply makes some voices easier for the machine to hear. And in the AI era, what the machine hears first may increasingly shape what the country comes to know.
© New Zealand Review (新西兰全搜索). All rights reserved.
