Artificial intelligence has already changed how we work in brand protection. It sorts noise, clusters data, and spots patterns. In that sense, it’s a very useful advance. But there is a growing idea in some quarters that the same technology will soon be drafting UDRP complaints. It won’t. And more importantly, it shouldn’t.

The problem is simple. AI is very good at sounding clever, but completely hopeless at knowing whether what it says is true. A well-phrased complaint isn’t worth much if the reasoning is sloppy or the evidence is simply wrong.

UDRPs are, at heart, an exercise in persuading the panellist of the merits of your case. Submissions have to be set out properly – precise, honest, and grounded in the actual evidence. If prompted well, a language model can imitate tone, but it has no concept of judgement. It can’t tell the difference between an argument that actually works and one that only sounds plausible. It mimics patterns, but it has no sense of purpose.

There is also the problem of training data. UDRP submissions aren’t published, and even if they were, I know from experience as a DRS panellist that quality varies wildly. There simply isn’t enough coherent material to teach a system what “good” looks like. You end up with a model that can copy the tone of a complaint, but has no idea what one actually is.

We are already seeing the consequences of sloppy submissions. In a recent WIPO decision – Victor Adam Bosak III v. Robert Rogers, R D Rogers LLC, WIPO Case No. D2025-4174 concerning <upscaleavenues.com> – the complainant cited several cases that didn’t exist, and several more that did exist but had nothing whatsoever to do with the arguments being advanced. The panellist suggested this had all the hallmarks of AI hallucination. And it’s worth reading their comments in full:

…Complainant cited several nonexistent cases to the Panel in support of its case. Complainant also cited several actual cases that do not stand for the proposition for which they were cited. If there was one errant citation or something that could be chalked up to momentary carelessness, the Panel might be disposed to overlook the error. But the sheer quantity of fake case citations compels something more than a shrug. The Panel is mindful of some recent instances where lawyers have been caught citing fake cases to courts of law in the United States, and in these instances it has been claimed that Artificial Intelligence (“AI”) programs were used by the lawyers, and the work product thereby obtained was rife with so-called AI hallucinations. Perhaps that is what happened here. Assuming so would be the most charitable interpretation the Panel could place on Complainant’s submissions to the Panel. Even if this were the explanation, the Panel would still condemn the decision to submit what Complainant submitted without doing some measure of verification that the cases cited were actually genuine and stood for the propositions advanced by Complainant in aid of its case. The failure to perform such due diligence (which, again, is the most innocuous construction the Panel can assign to these circumstances), and the resulting suite of massive and misleading errors in the materials submitted here for consideration, cannot be countenanced.

The complainant lost the case, with the panellist making a finding of reverse domain name hijacking against them. In plain English, this is the panellist taking the submission outside and giving it a well-deserved kicking.

Judging by what I and colleagues are seeing, it’s clear we’ll be seeing more sloppy submissions of this kind.

I recently watched a demonstration of a commercial system – which shall remain nameless – that was being sold as an AI drafting solution. The promises were impressive. The output was not. Every case citation was hallucinated. Not misused or a bit clumsy, but simply invented.

The draft survived a cursory glance, but the moment you checked anything, it unravelled completely. The fatal flaw is obvious: some of these systems have been built by people with no practical experience of drafting a dispute submission.

What worried me more than the hallucinations was the sales pitch surrounding them. The emphasis was entirely on output. Nowhere did anyone mention checking citations, verifying reasoning, or building in any sort of review process. The implication was that you could press a button and file whatever came out. The upscaleavenues.com case makes it abundantly clear that this sort of carelessness is an excellent way to lose a client’s dispute – and your reputation along with it.

So no, AI will not be writing your next UDRP. Not today, not tomorrow, and, given the nature of the task, perhaps not ever. There is too much nuance, too much context and too much professional responsibility involved.

I’m no Luddite, and industry friends and colleagues will confirm I’m an incorrigible geek. I find AI genuinely fascinating, which is why it’s so frustrating to see people concentrate on the wrong end of the process. Let the machines handle the dull work – the pattern matching, the data gathering, the evidence collection. That’s where they actually add value. But the strategy, the judgement, the argument and the accountability should remain firmly with us humans.

Technology can speed up the work, but it can’t do the thinking. Panellists expect clarity, accuracy and sound reasoning, and that still comes from people. There are no shortcuts to experience, judgement or the nuance required to fit the facts of each case. The human factor isn’t an add-on. It’s the point.

[Photo credit: ChatGPT, prompt by author. And, yes, using AI for the image was meant ironically]