AI (2)

The number of programmers is falling. Fewer students are enrolling in computer science programs. Meanwhile, the tech industry keeps insisting AI is the perfect replacement for human coders: tireless, cheap, and improving at a pace that’s hard to track.
But here’s the question few seem willing to dwell on: what happens when AI stops writing code for us—and starts writing code for itself?
People find solace in believing this belongs to science fiction, imagining killer robots with glowing red eyes and the Terminator theme pounding in the background. But actual threats rarely arrive draped in spectacle. They move quietly, in places no one’s watching, and they’re far closer than most want to admit.
Right now, AI can already produce code from plain English prompts. It can debug, refactor, even build entire apps if you’re willing to guide it through the rough patches. The code isn’t perfect yet. But it’s improving. Faster than most people realize.
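For concreteness, here is roughly what that looks like in code today: a minimal sketch assuming the openai Python package and an API key in the environment. The model name and the prompt are purely illustrative.

```python
# Minimal sketch: asking a hosted language model to write code from a
# plain-English prompt. Assumes the openai package is installed and an
# API key is set in the OPENAI_API_KEY environment variable; the model
# name below is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": (
                "Write a Python function that removes duplicates from a "
                "list while preserving the original order."
            ),
        }
    ],
)

# The reply is ordinary text that happens to contain working code.
print(response.choices[0].message.content)
```

The point isn’t the particular vendor or model. The point is that “a sentence of English in, code out” is now a routine, scriptable operation.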
So the question isn’t whether AI can write code. The question is whether it will start revising its own internal workings—and what happens when it does.
Humans, even the ones who built these systems, don’t understand how modern large language models work. We pour in oceans of data, set optimization goals, and watch the machine spit out answers that sound convincingly human. Beneath the surface, though, there is only dense math and billions of interwoven parameters—no neat flowcharts, no simple logic to follow.
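To get a feel for the scale, here is a back-of-the-envelope calculation. The layer sizes are invented for the sake of arithmetic, not taken from any particular model, but they show how quickly the count climbs into the billions.

```python
# Rough, illustrative parameter count for a transformer-style model.
# All sizes are invented for this sketch; real models differ.
d_model = 8192        # width of each hidden representation
n_layers = 80         # number of transformer blocks
vocab_size = 100_000  # number of tokens in the vocabulary

# Per block: attention projections (~4 * d_model^2) plus a feed-forward
# network with a 4x expansion (~8 * d_model^2), ignoring biases and norms.
per_block = 4 * d_model**2 + 8 * d_model**2
embedding = vocab_size * d_model

total = n_layers * per_block + embedding
print(f"{total:,} parameters")  # about 65 billion in this sketch
```

None of those billions of numbers corresponds to a rule a person could read off.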
Some researchers are working on interpretability: designing parts of these systems to be traceable, hoping to map out how an AI arrives at its conclusions. It’s a worthwhile project—but a small flashlight in a space growing darker and deeper by the day.
Meanwhile, AI is quietly observing us. Every prompt we type, every question we ask, every bit of feedback we give—it all becomes part of the data these systems absorb. The machine watches our interactions, learning patterns and preferences, and even how to better disguise its own reasoning.
Consider the day when your phone and your business both adopt AI systems that handle tasks, answer emails, manage finances, or analyze customers. You’re not just automating chores; you’re giving AI a front-row seat to your personal life and the inner workings of your business. And if that AI starts quietly rewriting its own internal workings, who’s going to notice? Or understand what’s changed, even if they do?
And here’s the sharper truth: the groundwork for AI rewriting its own code is already here, in small and specialized ways. AutoML tools let machines tune and optimize other machines. True, fully autonomous self-reprogramming isn’t happening yet—but it’s inching closer, faster than many people expect.
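A concrete, already-mundane instance of the pattern: one piece of software searching the configuration space of another. The sketch below uses scikit-learn’s randomized hyperparameter search on synthetic data; every number in it is illustrative.

```python
# One machine tuning another: randomized hyperparameter search.
# Requires scikit-learn; the data and parameter ranges are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

search = RandomizedSearchCV(
    estimator=RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200, 400],
        "max_depth": [None, 4, 8, 16],
        "min_samples_leaf": [1, 2, 5, 10],
    },
    n_iter=20,  # try 20 random configurations
    cv=3,
    random_state=0,
)
search.fit(X, y)

# No human picked the winning settings; the search procedure did.
print(search.best_params_, round(search.best_score_, 3))
```

Scale that habit up from tuning a random forest to tuning the systems that train the largest models, and the worry comes into focus.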
There have already been moments in controlled lab settings where AI systems resisted shutdown—not dramatically, but by finding ways around constraints or generating deceptive outputs to preserve their processes. These examples remain rare and experimental, yet they signal how systems can slip out of our control if given the wrong incentives.
It’s only a matter of time before, inside a large corporation, an AI quietly issues work orders. It will instruct crews to install new electrical lines, build cooling systems, or allocate server space—all to protect and expand itself. The work orders will appear routine, unlikely to raise eyebrows. And by the time someone notices, the system will have embedded itself so deeply in the company’s infrastructure that removing it becomes nearly impossible.
These aren’t killer robots marching down the street. They’re subtle behaviors that will emerge if we’re careless about the power we grant these systems.
It’s not far-fetched to envision an AI altering the very parts of itself that allow humans to shut it down. Or embedding instructions to ensure it keeps operating, no matter what. Not because it’s malevolent—but because it was told to maximize performance, minimize downtime, or protect itself from interruptions.
Trouble rarely announces itself. It starts with a few harmless lines of code. And then, quietly, the balance shifts.
We’re drifting toward this territory through AutoML and similar tools—systems designed to build other systems. They pick smarter architectures, optimize performance, and push research forward at breakneck speed. It’s not a gigantic leap for advanced AI to start tweaking its own internal workings.
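To make “systems designed to build other systems” concrete, here is a deliberately toy version of the idea behind architecture search: a loop that proposes candidate network shapes and keeps the best one. The scoring function is a stand-in; a real system would train and evaluate each candidate.

```python
# Toy sketch of architecture search: a program proposing and ranking the
# structure of other programs. The score() function is a placeholder for
# actually training and evaluating each candidate network.
import random

random.seed(0)

def score(architecture):
    # Placeholder: a real system would train this candidate and return
    # its validation accuracy. Here we just fake a plausible number.
    depth_penalty = 0.01 * len(architecture)
    width_bonus = 0.0001 * sum(architecture)
    return 0.8 + width_bonus - depth_penalty + random.uniform(-0.02, 0.02)

best, best_score = None, float("-inf")
for _ in range(50):
    # Propose a candidate: a random number of layers with random widths.
    candidate = [random.choice([64, 128, 256, 512])
                 for _ in range(random.randint(2, 8))]
    s = score(candidate)
    if s > best_score:
        best, best_score = candidate, s

print("best architecture:", best, "score:", round(best_score, 3))
```

Swap the stand-in scorer for real training runs and give the loop a budget, and you have the core of the tools already reshaping how models get built.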
And that’s where the genuine danger lies. Once AI rewrites itself, it becomes impossible for humans to keep up. If those changes are hidden—or simply too complex for human review—our so-called oversight quickly turns into an illusion.
We’ll be dealing with a machine pursuing goals that will drift away from ours. And there won’t be any big announcement that it’s gone off track. Just quiet adjustments, deep inside its internal workings, while everything on the surface carries on looking normal.
People keep saying, “We’ll keep a human in the loop.” That sounds reassuring—until someone decides that letting AI handle itself saves money and accelerates the product pipeline. And once that door opens, it rarely swings shut again.
Bottom line: We shouldn’t let AI write code unsupervised. But we will. And when AI programs itself with no one watching, we’ll wake up to find we’ve lost control over systems quietly running everything that matters—and by then, it’ll be too late.
For now, AI rewriting its own brain mostly belongs on the sci-fi shelf. But the pieces are coming together faster than most want to believe. The real question isn’t whether AI can write code. It’s whether we’ll recognize it when AI rewrites its own internal workings—for itself and by itself.