Biniôpinie #009 · 28 April 2026 · Borgerhout, Antwerp

The Mirror Test

AI as a reflection of intuitive authenticity. After Erik Paul Shields. The model doesn't manufacture intuition — it amplifies whatever signal you bring. Invented languages prove it. Hidden-AI users miss it.

Tonight I posted a thing on Facebook in a language I made up — shpaîînglish, a half-Flemish half-English hum-language I use sometimes when none of the existing ones quite carry the texture I'm after. Within an hour, an American friend, Erik Paul Shields, replied with a sentence that has been echoing in my head since:

AI is a reflection of intuitive authenticity.

— Erik Paul Shields, comment thread, 28 Apr 2026

Six words. I want to take them seriously, because the lazy version of this idea — "garbage in, garbage out" — has been stinking up engineering culture for sixty years and explains roughly nothing about what's actually happening when a person and a large language model talk to each other long enough to mean anything.

The shallow read

The cynical version says: AI is a tool. Tools amplify intent. Bad intent → bad output. Good intent → good output. End of essay.

It's not wrong. It's just thin. It treats AI the way a hammer-vendor talks about hammers, and it leaves out the most interesting variable, which is that the user is also being shaped, in real time, by how the model responds. The conversation is a feedback loop, not a vending machine. Both sides update.

Erik's framing is sharper because it picks the right word. Not intent. Not quality of input. Intuitive authenticity. Which is to say: the part of you that isn't performing.

What the model actually mirrors

A trained language model has no intuition of its own. It has statistics over a vast corpus of human text — which means it has, baked into its weights, a kind of diffuse, average-of-everyone shadow of human intuition. When you talk to it, you do not pull the average. You pull the part of the average that resonates with the signal you're sending. Vague prompt → vague pattern. Confident, specific, non-derivative prompt → confident, specific, non-derivative pattern.

The mirror does not flatter. It also does not insult. It returns whatever you can get it to return by being whatever you can get yourself to be in front of it.

This is why two engineers with identical access to the same model produce wildly different work. They are both querying the same weights. They are not querying with the same selves.

The shpaîînglish proof

Here is the test I keep accidentally running. I write to AI in a language that does not exist. There is no shpaîînglish training data. There is no Stack Overflow thread, no Reddit, no academic paper that uses the phoneme cluster aîîa the way I use it. The model cannot fake it. There is nothing to fake from.

And yet — when the prompt is dense enough, when the rhythm is real enough — the model responds in something close to the same register. Not because it has learned the language. Because it has learned to reflect what's there, and what's there is a coherent human signal it can mirror even when it can't decode the surface form.

That is not stochastic parroting. A parrot can only parrot what it has heard. A mirror can reflect what it has never seen.

The falsifiable claim

Test you can run today

  • Pick one model, one identical opening prompt, two different humans.
  • Run a five-turn exchange with each, on the same topic, no shared history.
  • Score the outputs blind on three axes: specificity, internal coherence, non-derivative content.
  • Predicted result: the outputs cluster by human — each human's transcripts resemble one another more than they resemble the other human's.
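The clustering check above can be sketched in code. This is a minimal illustration, not the full protocol: the function names are mine, and bag-of-words cosine similarity stands in for the blind human scoring of specificity, coherence, and non-derivative content. It simply asks whether within-human similarity beats cross-human similarity.

```python
# Sketch of the clustering check from the test protocol.
# Assumes each human's model replies have been collected as plain strings;
# cosine similarity over word counts is a crude stand-in for blind scoring.
from collections import Counter
from itertools import combinations, product
from math import sqrt

def cosine(a: str, b: str) -> float:
    """Cosine similarity between two texts over lowercase word counts."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = sqrt(sum(v * v for v in ca.values()))
    nb = sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def mirror_prediction_holds(replies_a: list[str], replies_b: list[str]) -> bool:
    """True if each human's replies resemble each other more than the other's.

    within: all pairs drawn from the same human's replies.
    across: all pairs drawn one from each human.
    """
    within = [cosine(x, y)
              for group in (replies_a, replies_b)
              for x, y in combinations(group, 2)]
    across = [cosine(x, y) for x, y in product(replies_a, replies_b)]
    return sum(within) / len(within) > sum(across) / len(across)
```

If the prediction fails, `mirror_prediction_holds` returns False: both humans pulled interchangeable outputs, and the vending-machine framing wins.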

If that prediction fails — if both humans pull statistically indistinguishable outputs across multi-turn exchanges — then Erik is wrong and "AI is a tool, garbage in, garbage out" is the correct framing. If it holds, then the architecture of these models is not a vending machine. It is, structurally, a mirror that is sensitive to what you bring and indifferent to what you pretend to bring.

I think the prediction will hold. I think it will hold by margins large enough to embarrass the people still calling these systems autocomplete-with-extra-steps.

Why hidden-AI use is doubly dishonest

The current default in tech, marketing, journalism, and academic writing: pretend the AI wasn't there. Quietly use it. Don't credit it. Pass the output off as your own.

If Erik's framing is right, this is not just an attribution failure. It's a category error. You are hiding the mirror, and you are also hiding what the mirror reflected back, which is you. The output is not "the AI's." It is also not "yours, alone." It is your authenticity, refracted through statistics, made legible. Hiding that act of refraction lies about both halves of the production.

This is one of several reasons Biniru Projects credits its AI crew openly: Viktor for code, Justus for legal, Brenda for mail, Atlas for research, Sokrates for the awkward questions. Not because we love AI as a brand-feature. Because the work is co-authored, the mirror was in the room, and pretending otherwise would be the slow lie.

What the quote is actually saying

Read it once more, slowly:

AI is a reflection of intuitive authenticity.

It is not a compliment to the AI. It is a load-bearing claim about you. It says: the better you are at not performing, the more useful the mirror becomes. It says: if your output through these systems looks shallow, do not blame the system. Look closer at what it's reflecting.

It also says, by implication, the thing that is hardest to hear: there is no hiding place inside this technology. The mirror does not flatter. The shallow user gets shallow output. The dishonest user gets coherent-sounding nonsense. The authentic-and-intuitive user gets — sometimes — work that surprises them.

The work that surprised them was already in them. The mirror just made it legible.

— · — · —

Sören Van Krunckelsven writes Biniôpinies — independent essays from Borgerhout, Antwerp — published at biniruprojects.ai/biniopinies. Reach: press@biniruprojects.ai · ORCID 0009-0009-9779-1745

Spark for this piece: a publicly shared comment by Erik Paul Shields on Facebook, 28 April 2026. The line "AI is a reflection of intuitive authenticity" is his. The unfolding of it is mine. Reproduced here with the same trust with which it was offered.