Your screen is the bottleneck
Most developers have AI coding tools. Almost none of them can see what those tools produce. The bottleneck is not the model. It is the four hundred pixels you have left to read the output.
Language models can absorb and propagate team culture. Turns out that’s more interesting than asking them to write tests.
Ask a developer to estimate a task and they’ll think about time. How long will this take? A day? Three days? A week if things go sideways? This is natural, understandable, and the root of most problems with software project planning.
What follows is a personal philosophy built from first principles. It is not a manifesto. It does not claim completeness. It is the most coherent account I can currently give of what I think is true, how I think ethics works, and what I have chosen to value. Every component is provisional and I have tried to specify what would make me revise each one.
The World Health Organisation estimates that at least 2.8 million people die each year as a result of being overweight or obese. Ischaemic heart disease is the number one killer worldwide, and is associated with risk factors such as smoking, lack of exercise, obesity, high blood cholesterol, poor diet, depression, and excessive alcohol consumption.
Language models are articulate, confident, and utterly without the emotional feedback loops that make human workers ultimately accountable. They are, in essence, the perfect corporate psychopath.
The decision-making class are being sold a bill of goods. They’re replacing accountable humans with unaccountable systems, and they can’t tell the difference.
My favourite definition of money comes from Dawkins, who called money a formal token of delayed reciprocal altruism. The idea is that you helped my friend, he gave you a shark’s tooth or a shell or a clam, and now I’ll return the favour.
The discourse around AI fails to define its core concept before subjecting us to foundation-destroying cognitive contagions. We should start by defining what exactly we mean by “Artificial Intelligence”.
Technical debt is bandied about the meeting rooms and corridors of tech startups and large enterprises, used to describe everything from messy code to lax security practices. And yet, its origins and pernicious effects are misunderstood.
Computers are fantastically naive. They compute rather than intuit, they calculate quickly and even more quickly fail to understand one’s intent. If you’ve ever programmed a computer, you’ll have encountered how unforgiving they can be. Operating systems and programming languages are fastidious collaborators who’ll refuse to cooperate over the tiniest indiscretion.
The measure of a modern software engineer in our attention economy is easily calculable thanks to companies like Microsoft acting both as bastion and companion in demonstrating one’s prowess.
I was walking through a London park recently, and while looking at the sky I started thinking about the ceiling above us.
Code represented in textual form obfuscates connections that must be understood before one can manipulate it with confidence.