Your screen is the bottleneck
Most developers have AI coding tools. Almost none of them can see what those tools produce. The bottleneck is not the model. It is the four hundred pixels you have left to read the output.
I have spent months interviewing developers. Dozens of them. Competent, experienced, opinionated about their tools. Nearly all of them use an AI coding assistant — Copilot, Cursor, Claude, sometimes all three at once. And nearly all of them have the same problem, though most do not know it.
They cannot see what they are doing.
Watch someone work in Cursor. A chat panel on one side. A terminal at the bottom. A file tree lurking somewhere. And the code itself — the thing that matters, the artefact under judgment — crammed into whatever is left. Four hundred pixels of width, perhaps. Sixty lines of height. They are reading machine-generated output through a letterbox.
This is not a minor gripe about tooling preferences. This is the reason most engineering teams have seen no meaningful change since adopting AI assistants. The model generates. The developer squints. They accept or reject, but they do not review — not properly, not with the full shape of the change held in view. And code you cannot review is code you cannot trust.
The work has changed. When you pair with a language model, you write less and read more. You steer. You evaluate. You catch the subtle wrongness that a model will defend to the death if you let it. All of that is judgment work, and judgment demands space. You need to see one hundred, two hundred lines at once. You need the whole diff, not a scrollable fragment. You need room to think.
Your editor does not give you this. Most editors were built for an era when the developer authored every line. The interface is optimised for writing. Panels, sidebars, chat windows, embedded terminals — each one perfectly reasonable, each one cannibalising the space you now need most.
So developers generate more than they can comprehend. They produce at speed and review at a crawl, because their tools have left them no choice. Then they wonder why the AI hasn’t made them faster.
Here is what I do. A high-resolution display — 4K at minimum, 6K if the budget allows. Not ultrawide. Pixel density matters more than panoramic width. Sharp, clean text at a size you can read for hours.
Three vertical splits. Version control on the left. Code in the centre. Claude Code on the right. When I need to bear down on a single file, one keystroke and the pane fills the screen. Hundreds of lines. No scrolling. No squinting. No letterbox.
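For readers who work in a terminal, here is one way to stand up that layout with tmux. This is a sketch, not a prescription: the session name and the tools in each pane (lazygit, nvim, claude) are placeholders for whatever fills each role in your setup. The one-keystroke full-screen toggle comes for free — tmux binds prefix + z to zoom the active pane by default.

```shell
#!/bin/sh
# Sketch: three even vertical panes, left to right:
#   version control | editor | AI assistant
# The pane commands are illustrative placeholders; swap in your own tools.
tmux new-session -d -s review 'lazygit'
tmux split-window -h -t review 'nvim .'
tmux split-window -h -t review 'claude'
tmux select-layout -t review even-horizontal
tmux attach -t review
# To fill the screen with one pane: prefix + z (tmux's default zoom binding).
```

The same shape falls out of any splitting tool — editor panes, a tiling window manager — as long as a single keystroke takes one pane full-screen and back.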
The rule is simple and non-negotiable: never accept what you cannot see. If a model hands you a two-hundred-line change and your editor shows thirty at a time, you are not reviewing. You are hoping. And hope is not an engineering practice.
Eighty to a hundred and twenty columns is the sweet spot for a single pane of code. Below eighty, you lose context. Above a hundred and twenty, the eye drifts. A 4K display will comfortably hold two or three panes at that width. That is enough. That is all you need to shift from hoping to knowing.
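The pane arithmetic is easy to check for your own display. The glyph width below is an assumption (a readable monospace face on a 4K panel might land somewhere near 12 px per character), so substitute a measurement from your own font and size:

```python
# Back-of-the-envelope: how many code panes fit side by side on a display?
# CHAR_PX is an assumed glyph width, not a measured one.

def panes_that_fit(display_px: int, char_px: int, columns: int) -> int:
    """Whole side-by-side panes of `columns` characters across a display."""
    return display_px // (char_px * columns)

CHAR_PX = 12  # assumed width of one monospace glyph at a readable size

print(panes_that_fit(3840, CHAR_PX, 100))  # 4K at 100 columns -> 3 panes
print(panes_that_fit(3840, CHAR_PX, 120))  # 4K at 120 columns -> 2 panes
```

At those assumed glyph widths, a 4K panel holds two 120-column panes or three 100-column panes — consistent with the two-to-three-pane layout above.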
None of this is exotic. A decent monitor. A considered layout. The discipline to look before you accept. But almost nobody does it, because almost nobody has reckoned with what AI-assisted engineering actually asks of them. They bolted a jet engine onto a bicycle and wondered why they kept crashing.
This is the first in a series on what I have learned building production software with LLM-assisted workflows — the practices and decisions behind a measured tenfold difference in delivered output.
Next: why generating less per cycle is the most important discipline in AI-assisted engineering.
If you are a CTO, VPE, or engineering leader trying to work out whether your team is getting real value from AI tooling, I would like to hear from you. Find me on LinkedIn or at [email protected].