Here's the thing: I've been hearing this song for a while now, and it keeps not being true in the ways people predict.
We've Done This Before
When electricity spread, doomsayers warned that candlemakers, lamplighters, and streetlight operators were finished. They weren't wrong about the jobs. They were wrong about the story ending there.
Then came computers. In 1964, President Johnson commissioned a report warning of a "jobless future" from automation. Clerical jobs got automated. Switchboard operators, typists, keypunchers. And then computers created entire industries that didn't exist before. Same story with the internet. Same story with smartphones.
Every single time, the pattern held: yes, some jobs went away. More came back in their place. The doomers were right about the disruption and wrong about the destination.
I'm not saying AI will follow the same pattern automatically. But I am saying that "this time it's different, this time it's really over" is a sentence that has been confidently wrong about every major technology for over a century. The burden of proof is on the people making that claim.
A Quick Analogy That's Stuck With Me
When power tools became affordable enough for everyday use, carpenters weren't replaced. They got more productive. The carpenter who once spent a full day hand-sawing a kitchen's worth of boards could do it in an afternoon. The expertise that made a good carpenter good (judgment, precision, knowing the grain of the wood) didn't become less valuable. It got amplified.
I keep coming back to this every time someone in my feed declares that AI is about to eat the software industry whole.
What It Actually Looks Like in Practice
I've been using Claude Code as part of my daily workflow for a few months now. The headline: it's made me meaningfully faster on the routine stuff.
Not in a "now I don't have to think" way. More in a "now I can focus the thinking where it matters" way. The scaffolding, the boilerplate, the third time I've written the same type of auth middleware. Those go faster. What that actually means in practice is that by 5pm I've closed more tickets, chipped away more of the backlog, and still have enough brain left over to actually be present at home instead of finishing a PR in my head during dinner.
That's not a small thing.
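To make "the routine stuff" concrete, here's the kind of boilerplate I mean: a bearer-token auth middleware, Express-style. This is a minimal sketch rather than code from my actual project; the requireAuth name and the verifyToken helper are hypothetical stand-ins for whatever your stack actually uses.

```ts
// Minimal sketch of routine auth-middleware boilerplate (Express-style).
// The verifyToken helper below is a hypothetical stand-in for your real
// token verification (JWT library, session store, etc.).
import { Request, Response, NextFunction } from "express";

// Hypothetical helper: returns a session if the token checks out, else null.
function verifyToken(token: string): { userId: string } | null {
  return token === "valid-demo-token" ? { userId: "demo-user" } : null;
}

export function requireAuth(req: Request, res: Response, next: NextFunction): void {
  // Pull the bearer token out of the Authorization header.
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice("Bearer ".length) : "";

  const session = verifyToken(token);
  if (!session) {
    res.status(401).json({ error: "unauthorized" });
    return;
  }

  // Attach the resolved user for downstream handlers, then continue.
  (req as Request & { userId?: string }).userId = session.userId;
  next();
}
```

The shape of this code is completely known in advance; only the project-specific details change. That's exactly the category of work where having a tool draft the first pass saves real time, while the judgment about what counts as a valid session stays with me.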
The Part Nobody Really Wants to Talk About
That said, and I think this is worth saying clearly, tools don't exist in a vacuum. They exist in systems, and systems have owners.
Here's a distinction that I think gets lost in the doomerism debate: there's a difference between imagined future harms and actual present ones. The superintelligent AI scenario is a possibility worth thinking about. But framing every AI concern as sci-fi extinction risk makes it easy to dismiss the very real problems already underway. The climate cost of massive data centers, the corporate exploitation of rare-earth mineral supply chains, AI-powered surveillance systems already deployed against civilian populations.
As one commenter in a thread I came across put it: it's like standing next to a giant pile of radioactive waste in a public park and spending all your energy arguing about whether it might one day take the shape of a T-Rex. Tomorrow's T-Rex is a possibility. Today's radiation is a reality.
The companies building and controlling these large language models are not neutral parties. Who controls the weights, who decides what gets filtered, who shapes the training data. These aren't technical footnotes. They're power. When a handful of organizations control the infrastructure that an increasing share of media, creative work, software development, and decision-making flows through, that's worth paying careful attention to.
I don't have a clean answer here. But "AI is a useful tool" and "we should think carefully about who holds the keys to that tool" aren't contradictory positions. Both can be true at once.
Where I Actually Land
The reality is probably going to land somewhere between the best and worst case scenarios. It usually does. Some of our fears will come true and some won't. Some things nobody predicted will happen, good and bad. That's not fatalism. It's just how technology has actually moved through history.
For now, in my day to day: yes, these tools are genuinely useful. They've made me faster without making my judgment less necessary. If anything, developing good instincts about when and how to lean on AI has become its own skill worth honing.
The carpenters who thrived after the table saw weren't the ones who refused it. They also weren't the ones who stopped thinking. They were the ones who figured out how to make the tool an extension of what they already knew how to do.
AI isn't coming for your job. Avoiding it while everyone around you adopts it might be a different story.
If we're all on a ship navigating some genuinely choppy water here, the answer probably isn't to stand at the stern and scream. Grab an oar and learn to work with the current.
Drop a comment if you've been wrestling with any of this. Curious where other people are landing.
Comments
I remember hearing a story about how labor unions resisted the Phillips screw and driver, since people were paid for the more difficult task of driving flathead screws (which are prone to stripping, too)
Also seems that all the people who are AI-resistant and throwing all the doomsday shade are the ones whose value is intrinsically tied to the tasks that make them valuable today