
Apr 20, 2026

Your AI tool just generated a beautiful interface. Users hate it. What went wrong?
A growing number of enterprise teams are discovering that AI can produce something that looks like design. Mockups materialise in minutes. Stakeholders nod. Development starts. And then — quietly, expensively — things fall apart.
This isn't an argument against AI. It's an argument for knowing exactly what AI cannot do, so teams stop paying for that confusion later.
The laws machines follow but can't feel
UX isn't opinion. The principles that make interfaces work have names, and they are backed by decades of research into how people perceive, think, and decide.
An AI tool can be trained on these principles. But here's the uncomfortable question: can it learn a law it's never broken?
Great designers learn these rules through failure.
They watch users hesitate, click the wrong thing, and ultimately give up. That experience doesn't reside in a model's weights; it resides in human judgement.
In enterprise products serving 50,000 users with wildly different mental models, a rule isn't enough. Judgement is.
The laws that live in a designer's hand
The issue isn't that AI lacks knowledge of these laws. Ask any AI tool, Copilot or ChatGPT, to explain Miller's Law and you'll receive a textbook answer. The problem lies in the distinction between knowing a law and feeling its violation.
Miller's Law — The 7 ± 2 Problem

Miller's Law suggests the human brain can hold roughly seven items (plus or minus two) in working memory at once. AI will faithfully render all the features your requirements document requested (twelve filters, eight navigation items and five data columns) without realising your user's mental bandwidth was exhausted three screens ago. By then, the user has stopped exploring.
A designer who has witnessed someone freeze at an overcrowded dashboard instinctively knows this number. They push back before the build. This pushback isn’t dictated by the prompt but stems from their experience and a genuine understanding of the context and different audiences.
Law of Proximity — Grouping Is Meaning

The Law of Proximity says closeness communicates relationship. AI can align components on a grid. What it can't do is understand that two panels sitting side-by-side mean completely different things to a technology user versus a field-level user encountering the system cold.
Proximity isn't placement. It's a conversation with your user's mental model. You need to know the user to have that conversation.
AI sees the fifth feature request. A designer feels what it does to the first four.
None of this is about intelligence. AI is not unintelligent. It's about experience as a sensing instrument — the kind built from watching real users hit real walls.
AI knows the rulebook. It has never felt the cost of breaking one.
A real story: when "good enough" became expensive
One of our business clients had a genuinely exciting problem. Their people were drowning in Excel — hundreds of rows, dense filters, pivot tables held together by institutional memory — just to find one thing: the right emerging technology to solve a specific business challenge.
They had the goal statement and requirements document. They fired up Copilot, typed a few prompts, and the mockups looked impressive. Leadership nodded, and development began.
However, no one paused to consider who the designs were intended for. The excitement was so great that they completely forgot about the possibility of different interface flows for various user types.
When the app launched, technology-side users adapted quickly. They'd lived in the Excel version. They understood the underlying data model and could read between the lines of the interface — because they already had the mental map.
Field-level users, however, opened the same application and hit a wall.
The navigation assumed knowledge they didn't have. The filtering logic mirrored the Excel structure it was supposed to replace. Labels made sense only if you already knew the answer.
Confusion spread. Workarounds appeared. Quietly, these users drifted back to their spreadsheets, abandoning the application in favour of the old ways it was meant to replace.
The application hadn't failed technically. It had failed humanly.
The design debt nobody budgeted for
This is when the designer finally got pulled in: not at the start, where they belonged, but as a firefighter.
What they found was the signature mess of AI-assisted, designer-absent product work:
Components that looked similar but behaved differently — generated by prompts with no shared design logic
Navigation built for expert users, with zero scaffolding for newcomers to the problem space who simply wanted a specific answer to a specific problem
No sign of progressive disclosure: everything surfaced at once, because the requirement said "show the data", not "guide the user through it". And the more items on screen, the harder it is for the mind to take them in.
Fixing it wasn't a visual refresh. It meant going back to the foundational question the team had skipped entirely.
What skipping design actually costs:
↳ 3 weeks saved upfront
↳ 3× the cost to course correct
↳ User trust — hardest to recover
Design debt is deferred design thinking. In enterprise, it collects interest fast.
What AI is actually good for (honest take)
Artificial intelligence is genuinely beneficial in the design process, offering rapid prototyping, variant generation, accessibility checks and scalable pattern libraries. These tools are real and their value is undeniable. Furthermore, they’re becoming increasingly sophisticated.
Platforms like Google Stitch and Claude’s own design capabilities are evolving into AI-native environments. This means anyone, whether a designer or not, can create, iterate and collaborate on high-fidelity user interfaces without relying on traditional design tools like Figma. This is a significant development.
For example, a product manager can sketch a working prototype in an afternoon and a developer can generate component variants without needing a handoff meeting. So does this make designers redundant? Quite the opposite. These tools are eliminating the grunt work – the repetitive, mechanical design execution that consumed hours without requiring judgement. What remains is precisely the work only a designer can do.
This includes knowing which prototypes to discard, sensing when a high-fidelity UI is technically beautiful but cognitively flawed, and understanding why a field-level user might approach a screen differently than the person who designed it.
No platform currently generates this level of insight, and none will anytime soon.
Conclusion — The Field Spoke. Are You Listening?
"The extreme AI optimists were wrong. So were the extreme pessimists. The truth — as always — lives in the design."
If you still need convincing, don't take my word for it.
In their State of UX 2026 report, one of the most respected voices in the field made something plainly clear: UI is no longer the differentiator — and surface-level design won't be enough to stay competitive. That's not a warning for designers. That's a warning for every enterprise product team that thought shipping faster with AI-generated mockups was the smart play.
Think back to that application I talked about earlier. The one that worked beautifully — for the people who already knew how it worked. The one that left field users stranded, confused, and quietly drifting back to their spreadsheets. That wasn't an AI failure. That was a judgement failure. A human oversight gap dressed up in clean-looking interfaces.
NN/G puts it precisely: anyone will soon be able to make a decent-looking UI…
What isn't easy to automate is curated taste, research-informed contextual understanding, critical thinking, and careful judgement.
That's the designer. Every single time…
In enterprise terms: the teams that will win aren't the ones who shipped the fastest prototype. They're the ones who asked who is actually going to use this, and what do they need to walk away successful — before a single component was placed.
That question isn't in your requirements doc. It isn't in your goal statement. It certainly isn't in your Copilot prompt.
It lives with your designer — someone who has spent years earning their instincts through iteration, failure, and the hard lessons that no model has ever been trained on. And it always will.
"AI is a powerful brush. But you still need an artist who understands what they're painting — and why the person on the other side of the screen needs to feel it."
