The (R)Evolution of Computing: From Bare Metal to AI
My Journey Through 25 Years of Digital Revolution
Sun May 18 2025
It began with a conversation between old friends at dinner—all of us engineers who’ve been building systems and applications since the late ’90s and early 2000s. One friend, who has been experimenting with AI tools in his highly regulated workplace, was excitedly brainstorming ways to deploy them safely without compromising compliance. Another, fresh from a hackathon where they’d leaned heavily on AI-assisted coding, voiced a darker concern: “At this rate, will there even be a need for human engineers in the near future?”
That tension, between unbridled optimism and existential doubt, feels familiar. I’ve navigated similar technological crossroads throughout my 25-year career, watching each revolution reshape our craft while preserving its essence.
There’s a particular thrill that comes with working in technology across decades—the kind that only reveals itself in retrospect. My career began in 1999, at the dawn of what would become the open-source revolution, when installing Linux meant compiling kernels by hand and debugging hardware conflicts via serial console. Today, as I fine-tune LLMs, build RAG pipelines, and investigate MCP servers, I’m struck by how much has changed—and how the underlying ethos of computing remains remarkably consistent.
The early days were tactile in a way that’s almost foreign now. Building a production server meant physically racking hardware, carefully selecting SCSI controllers, and tuning ext2 filesystems by hand. Linux, then still rough around the edges, offered something radical: complete visibility and control. You could strip it down to exactly what you needed, patch the kernel yourself, and understand every process running on your machine. There was no abstraction layer between you and the metal—just you, the source code, and the blinking lights of the data center.
The cloud changed everything. Suddenly, infrastructure became software-defined. Where once I’d spend afternoons crawling under raised floors tracing network cables, I could now provision entire fleets of servers with Terraform configurations. AWS turned capital expenditures into operational ones, and tools like Ansible and Kubernetes transformed manual processes into declarative code. The magic wasn’t just in the automation—it was in the reproducibility. Entire environments could be spun up, tested, and torn down with a few commands. The craft shifted from physical assembly to architectural design.
Now we’re in the midst of another sea change. Modern AI, particularly the open-model ecosystem, feels like Linux did in those early years—raw, powerful, and full of possibility. The parallels are striking: just as Linux democratized operating systems, open weights and permissive licenses are democratizing AI. Need to fine-tune a model for your specific use case? The tools are there, just as the kernel source was twenty years ago. Startups today can prototype, in days, AI applications that would once have required months of infrastructure work.
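To make that concrete, here is a minimal sketch of what “the tools are there” looks like in practice, assuming the open-source Hugging Face transformers and peft libraries; the model name and LoRA settings are illustrative placeholders, not a recommendation:

```python
# Sketch: attaching LoRA adapters to an open-weights model before fine-tuning.
# Assumes `pip install transformers peft` and a permissively licensed checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "mistralai/Mistral-7B-v0.1"  # placeholder: any open checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA trains a small set of adapter weights instead of the full model,
# which is what puts fine-tuning within reach of a single GPU.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base weights
```

Actually training it is the rest of the work, of course, but the point stands: the on-ramp is a few lines of open tooling, not a proprietary platform.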
What’s most interesting is how each technological wave has preserved the hacker ethos while removing entire classes of drudgery. We’ve gone from manually allocating IRQs to systems that self-optimize based on workload patterns. The throughline is democratization: Linux put operating systems within reach of anyone willing to learn; the cloud did the same for infrastructure; now AI is doing it for capability, making what once required specialized expertise accessible through natural language interfaces and open models.
Yet some fundamentals remain unchanged. The best engineers still peel back abstractions when needed. Performance still demands understanding what happens underneath. And that essential joy of creation—of solving interesting problems with elegant systems—persists regardless of the technological substrate.
As I watch today’s developers build with AI, I’m reminded of the early Linux community—that same spirit of experimentation and open knowledge sharing. Yet legitimate concerns remain: the environmental costs of training massive models, the black-box nature of some AI systems, and the real possibility that these tools could displace certain development jobs while creating new ones. The most thoughtful engineers I know aren’t just adopting AI—they’re interrogating it, understanding its limitations as deeply as its capabilities.
This is why history matters. The lessons from our Linux and cloud migrations teach us that technological upheavals tend to be neither as apocalyptic nor as utopian as we first imagine. The developers who thrived through past revolutions weren’t those who resisted change or blindly embraced it—they were the ones who learned to harness new tools while maintaining their fundamental craft. Today’s AI revolution demands that same balanced approach: enough optimism to explore its potential, enough skepticism to use it wisely, and enough historical perspective to know we’ve navigated similar transformations before.
The tools will keep evolving, but the best engineers will continue doing what they’ve always done—peeling back layers when needed, solving interesting problems, and remembering that technology ultimately serves human creativity, not the other way around.