
Three Years Of LLMs: The Run-Up, Realities, Risks, and Road Ahead
Introduction
Toward the beginning of my career, circa turn-of-millennium, I spent most of my time as a happy hacker writing artisanal procedural code in Emacs, pleased to push a few hundred lines of C, Perl, or JavaScript to nearby metal most days. This provided, in its best moments, delightful sensations of flow and mastery atop a greenfield (think writing your own object-relational mapper), and, in its worst moments, the maddening frustration of life in a cyber desert (think giving up on ever getting your Ethernet board to work under Linux because nobody wrote a compatible driver and Google doesn’t yet exist).
My 2000-era self would barely recognize 2020-vintage me, and yet the pace of innovation felt incrementally digestible — the open source renaissance yielded a vast library of foundational components that needed research and integration, declarative languages like SQL and Terraform increasingly displaced procedural code, and the DevOps discipline blurred the boundaries of software engineering and systems administration, all while the arrival of containers, schedulers, hypervisors, and hyperscalers pushed everything toward the virtual.
As someone who came of age as the phrase “going online” went from meaningless, to meaningful, and then back to meaningless, and who has enjoyed an extremely diverse career across problem spaces, regulatory contexts, and team topologies during our species’ tech explosion, the continual chaos of my personal story perhaps positioned me relatively well for the LLM earthquake whose tremors we began feeling in early 2023, and yet nobody stands ready for the world in which we find ourselves today, much less able to predict what will come tomorrow.
At the outset, highly repetitive marginally skilled work seemed the most amenable to automation, with call center operations held up as exemplar. High-skilled white-collar workers, meanwhile, breathed a sigh of relief, enjoying a smug sense of invulnerability that lasted until about when LLMs started passing bar exams like it weren’t no thing and analyzing images with such frightening competence as to place radiologists squarely in the cross-hairs.
Perhaps nowhere have we seen greater churn around both the perception and reality of potential applicability than in the area of programming. At first LLMs seemed to offer an interesting variation on a search engine, one that provided not simply a list of pointers into an indexed collection of source documents, but rather a search into the space of possible realities, rendered as the highest probability synthesized answer, albeit one woefully prone to hallucination.
Then, gradually, as core models improved, they created a world in which serious people began pondering the risk of eating our seed corn, as senior devs increasingly found asking an LLM for a snippet of code more appealing than hiring, training, and managing junior devs. Next, tools like Cursor and Claude Code began codifying agentic workflows, increasing the leverage to a degree that a single principal engineer could credibly match the output of a whole team in at least some contexts. We continue to this day to see a torrent of innovation around tooling and tradecraft that threatens to overwhelm the individual while upending whole ecosystems.
As the power that an individual can summon grows by leaps and bounds, the experience feels ever more like that of sorcerer and zookeeper than artisan and engineer, and the right approach requires a highly contextualized balancing of leverage exploitation against risk management. In a vacuum of real world encumbrances, the Agency of those who grok where this technology is going feels boundless, but Accountability to stakeholders, customers, and governments may limit what we can reasonably break or discard, Architecture rises to paramount importance as we attempt to manage complexity while constraining blast radii, questions of Autarky abound as we wrestle with problems as diverse as systems availability, skill atrophy, and adversarial action, and the problem of Alignment ever looms.
Agency
I remember well the earliest moments of bootstrapping the first big project of my government days — a gray-zone hard-scrabble where a nascent system, like a stowaway on a large ship, clung for life to a mod_perl route embedded in an Apache daemon living inside a blade server intended for someone else’s project, eschewing a DNS entry of its own lest it draw unwanted attention. What scant open source tooling was available to me consisted of the RPMs that came stock with Red Hat Enterprise Linux or the occasional library arriving piecemeal through an excruciatingly slow acquisition security channel. About a year into the adventure I found myself giving a demo to the agency’s director which impressed upon me the adage that “there are only two tragedies in life”, whereupon, having received a truckload of cash to formally fund the project, I spent the better part of the next year turning it into officially sanctioned equipment in racks with routable IPs and DNS entries, because government.
So imagine my relief when, bootstrapping the next government project four years later, I found that I could do so atop a centrally managed VMware cluster and point my package manager at a respectable internal mirror of third party repositories downstream of a rapidly flowering open source movement, allowing me to shed the burdensome accidental complexity of non-differentiating heavy lifting to instead focus on the essential complexity of my core problem domain. In lieu of fighting unwieldy funding obligation and acquisition security processes, to say nothing of writing extremely low-level networking and database code, I could now have someone click a few virtual machines into existence for me and then scaffold an application with Ruby on Rails.
Successive waves of innovation in the larger world continued to arrive, providing not just better practices for the specialist but also greater agency for the generalist, a phenomenon I wholeheartedly embraced. Having begun in a world where software engineers, system administrators, network administrators, database administrators, purchasing coordinators, and financial managers existed as discrete specialties with hard boundaries, I rejoiced in a world where the confluence of version control, containerization, schedulers, public clouds, virtual private networks, and infrastructure as code enabled me to be like — “Awwwwww yeeeaaaah! Who’s got two thumbs, a credit card, a generalist’s background, and a hunger to build? THIS GUY!” — and pull whole production grade systems into existence unilaterally while on a shoestring budget.
And so I have mad empathy for the increasingly minimally (or even non-) technical people getting excited by the era of LLM powered software development. Far be it from me to say “curb your enthusiasm”, but I am inclined to suggest moderation thereof. The range and combinations of software applications and operational contexts boggle the mind of even the most astute observers and by now the trope of laypeople marveling at a chatbot creating a “custom calendaring app” for them by copy-pasting a whole GitHub repo onto their laptop has grown quite stale. Yet the tooling does keep getting better, enormous latent demand for self-actualization exists in the world, and many use cases look like Bob organizing his recipes while not much caring about system outages, data spillage, contractual obligations, or compliance regimes. And even in more onerous contexts Bob may reasonably rapid prototype an app to run in a limited scope and thereby inform a more formal engineering process.
For all the FUD around the obsolescence of programming, we would do well to recall the Jevons Paradox, a concept born of coal and steam but no less applicable to programming and LLMs — as the efficiency of a resource’s utilization grows, its aggregate demand often goes not down but up as previously uneconomical activities become practical. Along similar lines, I posited elsewhere some months earlier a Gibbs Window — the kinds of systems that can be built at the higher end and that will be built at the lower end as a result of the range of actors who can participate in some form of “programming” — noting that this window won’t simply slide but will grow as we not only lower the barrier to entry but also smooth the learning curve, democratize access to knowledge, and provide unprecedented leverage to the highest tiers of experts.
The more high leverage our tools become at empowering individuals, however, the greater the risk that a person of limited experience will operate not on causal reasoning about systems but rather correlative observations in limited experiments, the essence of what Karpathy recently styled as “vibe coding”, but which has long existed as “programming by coincidence” or “cargo cult programming”. The dose makes the poison and as friction decreases we see people paying out increasingly generous amounts of rope with which to hang themselves. It is for good reason that we send military recruits to basic training on day one and only a handful of them ever end up in a squad of special forces operators, the cockpit of a fighter aircraft, or the captain’s chair of a capital ship — a phased progression that affords the individual the opportunity to gradually metabolize lessons around complexity and danger while allowing the system to filter and sort individuals in a way that gradually promotes them to positions of greater agency, and thus blast radius, in accordance with their demonstrated competences — an approach that we discard at our peril.
Not all self-empowering technology is created equal and with some arrivals we have observed substantial discontinuities. The creation of compilers, for instance, decoupled business logic from the vagaries of microprocessor architecture while leaving practitioners to still think rigorously about how their system needed to behave logically. With the rise of containerization, CI/CD pipelines, and infrastructure-as-code we have seen software engineers gain leverage by extending their domain’s rigor into an adjacent one. The swings of the pendulum from mainframes to PCs to commodity servers to public clouds to SaaS, meanwhile, were born of successive revolts against bureaucracy that led in turn to incomprehensible sprawl, cost proliferation, and perilous attack surface, as we remembered why we used to do things a certain way and rationalized that with new capabilities, only to over-correct and thereby foment the next revolt. Perhaps the advent of SQL, a declarative language for interrogating databases that hews close to English, in some ways presaged the LLM revolution to come decades later by so thoroughly separating desired outcomes from runtime behavior, if in a way where we at least agreed upon the input syntax.
Where LLMs really differ here is in how much power they give an individual to create superficially “complete” systems whose shortcomings that individual doesn’t appreciate, much less how or when they may materialize, the result of an over-fixation on easily validated functional requirements (i.e. end-user observable features) to the detriment of non-functional requirements (e.g. security, scalability, maintainability). When appreciation for the situation does belatedly arrive in serious contexts it will likely come of embarrassing incidents, degraded performance, cost overruns, and stalled velocity — inter-locking problems that impact each other as one desperately attempts to escape a particular local maximum, not unlike the hapless database programmer who for want of a fundamental understanding of the underlying algorithms and data structures thrashes interminably, fixing one slow query only to drag down another.
How quickly the high of rapidly shipping code you don’t understand fades into an excruciating hangover as the bill for under-appreciated tech debt comes due…
Accountability
Some of the most meaningful and gratifying years of my career happened also to be among the most continually frustrating and terrifying. As an application that I had bootstrapped in support of a narrowly focused mission generalized into a broadly used platform, I felt the weight of design decisions made years earlier bearing down on me while a sales problem (“wouldn’t you like to use this?”) morphed into one of governance (“wouldn’t you please tell me before you use this?”).
Perhaps never have I so regretted a single design (non-)decision as allowing the underpinning PostgreSQL tables to use INT instead of BIGINT as the primary key type, necessitating a harrowing race to migrate a system supporting 24×7 real-time operations by running background conversion jobs on the hot database in anticipation of a zero-downtime cut-over. Perhaps never have I felt so sick as the day when surging mission workloads and a badly configured underlying SAN caused vacuum jobs to fall so far behind that the system wedged when it bumped up against a transaction ID wrap-around guard, necessitating my sleeping at the office to bring everything back online as soon as repair jobs had completed. But in these situations and countless others like them I lived the life of Extreme Ownership, relying on other people where possible but always acting as the final backstop, to include even debugging said SAN for the infrastructure org when they couldn’t figure it out themselves and it threatened to burn down the house.
Accountability creeps up on us in a variety of ways — warehouses of legacy data that must stay accessible, API semantics on which components come to rely, evolving legal and policy regimes to which we must adhere, funding constraints that we must navigate, safety and security risks we must address, promises we have made to customers and stakeholders about availability and performance, and so on. All of this and more, on top of every line of code in your system, plus every configuration someone mischievously clicked into a console instead of automating a process, represents the full “context” of the engineering problem, and any theoretical AI can only be as good as the completeness of the context available to it, to say nothing of the practical limits on context size that modern models can handle, or the congruence of the data on which it was trained with the problems you have.
If the evolution of a critical system places it on a local maximum from which the latest AI cannot extricate it by painfully climbing down into the valley and up the next peak, what happens when none of the available humans understand the code base? If some devastating gray swan event strikes and the total context required to diagnose and repair the situation extends far beyond what the AI can access, how would an organization put out the fire without a human who is expert in technical forensics and can reach into every realm of knowledge no matter how disconnected and squishy? If a black swan arrives, will an AI system based on extrapolation from previous situations have any idea what to do? The closer the problem you are wrangling or the solution you are fashioning lives to frontier space, the more you ought to heed the admonishment of a former co-worker of mine — “help is not coming”.
When one considers how technology systems, engineering teams, business processes, customer expectations, and stakeholder demands co-evolve over time, a scary and inescapable truth emerges around the convergence of certain parallel developments — the greater the age of a system, then the more diverse and valuable the customers who likely depend on it, the more complex its implementation and therefore byzantine its failure modes, the more obscure and numerous the external assumptions wrapped around it, and the fewer the original engineers that presently remain. Everything everywhere all at once… This naturally creates horrifying non-Gaussian distributions around the likelihood, impact, and complexity of scenarios which could leave an enterprise blind to certain risks until they are staring down a potentially company ending event and wondering what on earth they can possibly do to save themselves.
Early in the take-off of LLM-powered programming assistant technology, a swath of thinking predicted the death of SaaS, imagining that the leverage available from these tools would drive people in droves to in-house the solutions they had long contracted out to an assortment of vendors. People covered with scars from time spent in the enterprise trenches, however, knew better than to go knocking down Chesterton’s Fences without asking a few basic questions, and such predictions always seemed implausible to me for three chief reasons.
Firstly, companies engaged in such relationships aren’t simply trying to solve a problem superficially, but rather effect holistic liability outsourcing, relying on their counter-party being contractually obligated to cure any breaches as well as being likely to continue evolving and maintaining the product as the world keeps turning.
Secondly, for all the efforts of the very best humans and tools, complex distributed systems inevitably fail in maddeningly complex ways, and one’s confidence in them at the largest scopes and timescales rests not simply on logical reasoning but also in large measure on the statistical confidence that comes only of putting millions of miles on the code base in real-world use cases.
Thirdly, as more human experts attach to a code base, not only do you get more eyeball-hours hunting bugs per unit of functionality, but also enjoy greater fault tolerance in your ability to debug the inevitable bad outcomes despite unavoidable personnel churn. These firepower concentrating benefits naturally drive consolidation that makes liability outsourcing more attractive.
Every outsourcing decision represents a trade-off around leverage, economy, flexibility, risk, and, crucially, accountability. I think often of a time I placed a credit card order from Grubhub for four items, the restaurant had run out of one of them and wanted to issue me a partial store credit, the restaurant also then forgot to bag another of the items, then the delivery driver somehow only grabbed one of two bags, and so by the time my four-item ~$50 dinner order reached me (tip, tax, fees, and more fees) all I got was a sad little appetizer, and none of my four (!) counter-parties would take accountability to make it right. What business systems are you comfortable exposing to such piecemeal outsourcing?
Architecture
“Discipline equals freedom”, Jocko tirelessly counsels, seeking to impress upon us how doing the hard foundational work early and often liberates us to perform at peak both over time and during a crisis. Though formed in the crucible of military training and operations, the advice generalizes to all professions, and we would do well to heed it when attempting to responsibly deploy extremely powerful yet fickle technology in high stakes domains. While Architecture constrains, so, too, does it liberate when done right, ideally providing a framework in which to aggressively push the envelope while preserving Accountability to one’s disparate obligations and goals.
A regular theme in my consulting practice has involved parachuting into a situation where progress has begun to stall for a few reasons — infrequent releases owing to insufficient automation cascading into releases less frequent still owing to the fear of shipping weeks’ or months’ worth of work in one shot without robust testing faculties, debugging proving fiendishly difficult for want of telemetry while what telemetry does exist doesn’t easily relate to other events, insufficient isolation rendering too many operations a scary god-privileged proposition, far too much knowledge existing as tribal lore, and system instances looking more like cherished pets than commodity cattle.
Whenever remedying such a situation, or building a system from scratch, I first anchor to the goal of enabling zero-marginal-cost full-stack system copies. This approach begins by shaking loose the countless hidden assumptions that organically accrue as a prototype takes form, refactoring them into patterns that we can mechanize and parameterize, establishing schemas for the identifiers of systems and subordinate components to ensure their uniqueness while enabling resource discovery, and ultimately bringing into existence arbitrarily many copies of developer environments, testing systems, and production shards. I know I have succeeded when a new developer can safely and confidently ship a feature to prod on their first day alongside the continuous stream of PRs made by their teammates, everyone knowing a roll-back generally to be as frictionless as a roll-forward and that a mixture of rolling deployments and feature flagging prevents release pipeline traffic jams and catastrophic system-wide failures alike.
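To make the identifier-schema idea concrete, here is a minimal sketch in Python; the names (SystemId, the example product, the DNS zone) are purely illustrative assumptions, not a prescription, but they show the property that matters — every copy derives its resource names the same mechanical way:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SystemId:
    """Identifier schema for one component of one full-stack system copy (hypothetical)."""
    product: str    # e.g. "orders"
    env: str        # e.g. "dev-alice", "test-pr-1234", "prod-shard-07"
    component: str  # e.g. "api", "worker", "db"

    def resource_name(self) -> str:
        # Globally unique, mechanically derivable name used for cloud resources,
        # container images, queues, and so on.
        return f"{self.product}-{self.env}-{self.component}"

    def dns_name(self, zone: str = "internal.example.com") -> str:
        # Discoverable endpoint that follows directly from the identifier.
        return f"{self.component}.{self.env}.{self.product}.{zone}"

# A per-developer environment and a per-PR test system differ only by parameters:
print(SystemId("orders", "dev-alice", "api").resource_name())  # orders-dev-alice-api
print(SystemId("orders", "test-pr-1234", "api").dns_name())    # api.test-pr-1234.orders.internal.example.com
```

The specifics matter far less than the consequence: the Nth environment copy costs no more thought than the first.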
With a foundation built on Repeatability we can continue up the hierarchy of DevSecOps needs to ponder matters in the realm of Observability. Now that we’re building runtime environments as container images and staging them in an artifact repository where one of the tags derives from the Git commit hash to form the contract between decoupled build and deploy processes, we can begin to leverage said hash in infra-as-code modules for compute nodes that stamp it into a canonical environment variable whence a telemetry library can source it and thereby enrich logs and traces that flow to a central location that underpins a single-pane-of-glass interface for monitoring, alerting, and debugging. Whew!
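The final link in that chain, the enrichment step, can be as small as a logging filter. Here is a minimal sketch assuming a hypothetical APP_GIT_COMMIT variable stamped into the environment by the deploy pipeline:

```python
import logging
import os

# Assumed convention: the deploy pipeline stamps the image's Git commit hash
# into this environment variable (the name is illustrative, not a standard).
GIT_COMMIT = os.environ.get("APP_GIT_COMMIT", "unknown")

class BuildInfoFilter(logging.Filter):
    """Attach the commit hash to every log record so any log line or trace can be
    correlated back to the exact build that produced it."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.git_commit = GIT_COMMIT
        return True

handler = logging.StreamHandler()
handler.addFilter(BuildInfoFilter())
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s commit=%(git_commit)s %(name)s: %(message)s"))
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger("checkout").info("payment authorized")
# e.g. 2025-11-30 12:00:00,123 INFO commit=3f9c2ab checkout: payment authorized
```

The same hash can flow into traces and metrics, which is what lets you ask “what changed?” and get a commit-level answer.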
That sounds like it might be pretty nice, even in the old world. Now imagine a world in which a zookeeper human is attempting to wrangle not simply a small team of conventional engineers but rather a posse of AI-enhanced engineers plus semi-autonomous agents where the velocity of code creation is off the historical charts. How on earth will you keep your finger on the pulse, make decisions that judiciously balance operational risk and delivery velocity, and be able to debug, roll-back, and repair systems expeditiously?
As it turns out, all of these Repeatability and Observability investments yield higher order benefits in an LLM renaissance, doing so by increasing the surface area of your system with which AI can interact to design, test, and debug. If you’re firing on all cylinders in the Repeatability realm then an agent can pull knowledge of system infrastructure and runtime configurations into its context when generating code revisions as well as deploy production realistic clones in which to validate all of its work. Meanwhile, zooming out, really nailing your Observability story allows an agent, much like a human, to see errors in real time as well as reason about precisely when and how a defect made its way into the code base and thereby squeeze the possible explanations in a way that accelerates detection, diagnosis, repair, and hardening.
Of course, if we really want to move fast while staying Accountable, it’s not good enough just to be quick to detect and remediate issues. We will find some defects harder to repair than others (think data loss or corruption), certain kinds of incidents less reversible than others (think client data spillage or the theft of critical IP), and some workflows less tolerant of disruption than others (think transportation, manufacturing, finance, medicine, emergency services, law enforcement, military operations, intelligence gathering, etc). If you want to turbocharge your delivery velocity with AI, you live in a world with limits on your Fuck Around Find Out mandate, and you accept the reality that test coverage is always imperfect, then you need to climb higher still up the DevSecOps hierarchy of needs, past Repeatability and Observability, to the heights of Minimalism.
The realms of concern and attendant approaches around Minimalism number several, but all center on reducing the attack surface of components to make it harder to gain entry, constraining the lifetime and tooling of such components to render toeholds more cumbersome and slippery, or tightening the blast radius of such components by limiting what neighbors they can reach, interrogate, and manipulate. None of these matters are new, but they are all newly urgent with the rise of LLM-generated code and prompt injection risk, requiring shrewd mitigation strategies if we are to grant greater autonomy and scope to a burgeoning digital workforce without unduly throwing caution to the wind.
If we strive to build container images not just repeatably but minimally, mount them in narrowly scoped task definitions with read-only file systems, expect the scheduler to inject short-lived and minimally permissioned tokens at runtime, attach those run-times to appropriately locked down network segments, tear them down with some regularity, and persist their logs to an immutable external store, the benefits will prove legion. We are thereby not just engaged in long-agreed security best practices, each of which provides value on its own while also serving as an interlocking and compensating control. We’re also broadly shaping an ecosystem that at least partially and statistically mitigates the risks of bringing increasingly stochastic development processes and operational components into our midst.
Better still, these pushes toward Minimalism, rooted in safety and security ends, feed back into general operational and financial considerations, creating a virtuous cycle. As AI accelerates your delivery velocity, your runtime environments will naturally experience higher churn as you ship new revisions with increasing rapidity, incentivizing minimalism for components not just to reduce danger but also to shrink startup times, achieved both by limiting churn to componentry that actually changed and by ensuring that fresh instances reach a ready state as fast as possible. Meanwhile, as AI accelerates the uptake of your products, inefficiencies around unit economics will become intolerable sooner, the salve for which, again, looks like minimalist stateless components that an orchestrator can scale independently and elastically with the greatest rapidity and least cost.
And, oh, have you heard that LLMs do better the tighter the scope of the request and the more explicit the goals and constraints? Minimalism to the rescue again! The cleaner your architecture owing to rigorous separation of concerns, and the clearer the contracts between components, the better an LLM of a given sophistication will do when asked to implement a feature, on both the axes of correctness and cost. So think hard about those module boundaries, explicitly document and enforce their purposes, interfaces, and invariants, keep growing the test suite, and leverage your highly repeatable system deployment mechanisms to make your testing as realistic and isolated as possible. The more your system looks like a strand of pearls rather than a blob of spaghetti, the happier humans and AIs alike will be.
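What “document and enforce the contract” can look like in practice is nothing fancier than an explicit interface with its invariants written down where both humans and LLMs will read them; a minimal Python sketch, with all names (PricingEngine, Quote) purely illustrative:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass(frozen=True)
class Quote:
    currency: str      # ISO 4217 code, e.g. "USD"
    amount_cents: int  # invariant: non-negative integer cents, never floats

class PricingEngine(Protocol):
    """Contract for the pricing component (illustrative).

    Invariants an implementation must uphold:
      * pure function of its inputs (no I/O, no hidden state)
      * never returns a negative amount
      * raises ValueError for unknown SKUs rather than guessing
    """
    def price(self, sku: str, quantity: int) -> Quote:
        ...
```

A boundary like this gives an LLM a tight, checkable target and gives your test suite an obvious seam.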
You may have also heard that AI is expensive, not just up front but also, in many use cases, in an ongoing way, contributing not just to the CapEx of R&D but also to the OpEx of servicing customer workloads, leaving many AI-enabled products with a cost profile that more resembles telcos than traditional SaaS and that sometimes results in a product people enjoy using but that you can’t afford to operate. Unit economics and cost allocation thus prove standout concerns and Observability comes to the rescue — telemetry that feeds metrics, monitoring, alerting and billing will prove crucial to workable FinOps as well as general system efficacy.
Changes to model versions, system prompts, and data formats may dramatically affect not just the speed and quality of queries but also cost, necessitating the ability to attribute such downstream changes in behavior to legible upstream changes to inputs and configuration, ideally with identifiers that point to specific tags on code under version control. Different customers may also impose wildly different loads on your system which may translate directly to dollars in your AI vendor bill and so you had best be able to perform tenant-level cost allocation, at least to inform your subscription tiers, and possibly even to govern a metered chargeback model.
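A hedged sketch of what that attribution plumbing can look like: a thin wrapper around every model invocation that emits a usage event tagged with tenant, model, and prompt version. The price table, identifiers, and helper names below are all hypothetical stand-ins, not any vendor’s API:

```python
import time
from collections import defaultdict
from dataclasses import dataclass

# Illustrative price table; real numbers come from your vendor contract.
PRICE_PER_1K_TOKENS = {"model-a-2025-06": 0.0150}

@dataclass
class UsageEvent:
    tenant_id: str
    model: str
    prompt_version: str  # e.g. a Git tag for the system prompt under version control
    tokens: int
    cost_usd: float
    latency_s: float

ledger: dict[str, list[UsageEvent]] = defaultdict(list)

def record_llm_call(tenant_id: str, model: str, prompt_version: str, call):
    """Wrap an LLM invocation (`call` returns (text, tokens_used)) and emit a usage
    event that feeds metrics, alerting, and per-tenant chargeback."""
    start = time.monotonic()
    text, tokens = call()
    ledger[tenant_id].append(UsageEvent(
        tenant_id=tenant_id,
        model=model,
        prompt_version=prompt_version,
        tokens=tokens,
        cost_usd=tokens / 1000 * PRICE_PER_1K_TOKENS[model],
        latency_s=time.monotonic() - start,
    ))
    return text

# Stubbed call standing in for a real vendor SDK:
record_llm_call("tenant-42", "model-a-2025-06", "prompts-v1.3.0",
                lambda: ("summary text", 1_850))
print(sum(e.cost_usd for e in ledger["tenant-42"]))  # per-tenant spend this period
```

The same events that inform your subscription tiers can also trip an alert when a prompt change quietly doubles token consumption.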
Lastly, we should briefly divert from matters of general system architecture to application-specific choices, remarking on the implications of where within the SDLC and data flows we choose to involve stochastic mechanisms in preference to deterministic ones. Such decisions may dramatically impact any or all of developer velocity, unit economics, system reliability, and customer happiness. And, interestingly, all such choices may in part hinge on one half of the purported two hard problems in computer science — caching.
Consider first one of the earliest business domains to find itself in the cross-hairs of LLMs, that of call center operations. Many companies rushed to squeeze costs by replacing humans with bots, resulting in the horrendous Enshittification of customer support we all experienced, often with outcomes so heinous as to foment public apologies and process regressions.
Far better at the outset, if you wanted to experiment with AI without jeopardizing customer relations, would have been not to interpose an LLM as a gatekeeper between customer and company but rather to tee the data of existing human-to-human interactions and use that to recommend escalations, detect patterns, assess performance, and, perhaps most crucially, formulate and maintain run-books.
Consider second the challenge of integrating LLMs effectively into data engineering pipelines. Specifically, consider the grubby business of data ingestion, one of the most continually frustrating parts of managing a highly integrative system, owing to the lack of coordination with and fundamental volatility of the outside world.
The time-honored legacy approach looks like tasking a human to create a new version of a parser every time ingest breaks or, on a good day, when documentation is kind enough to warn of impending doom with sufficient lead time that you can craft a new version of your parser with forward compatibility.
A naive approach at AI-enablement — costly, slow, and fragile — might be to repeatedly hand an LLM a piece of data with a system prompt that includes a target schema and instructions to make a best effort to map the data into it.
Better still would be an approach that leveraged caching, buffering, system monitoring, intelligence gathering, and re-processing — have an operational parser of the deterministic variety, trap parsing failures and route problematic data to a queue, have an LLM monitor that queue and suggest a PR that would heal the parser, and finally flush the problem queue back to the main queue for processing (and for good measure have another background LLM reading docs and suggesting proactive fixes).
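A skeletal sketch of that pipeline in Python, with the LLM and pull-request helpers left as hypothetical stubs since the real integrations will be vendor-specific:

```python
import json
from collections import deque

main_queue: deque[str] = deque()   # raw inbound records awaiting ingest
quarantine: deque[str] = deque()   # records the deterministic parser rejected

def parse(record: str) -> dict:
    """Deterministic parser: cheap, fast, and strict. Raises on anything unexpected."""
    doc = json.loads(record)
    return {"id": int(doc["id"]), "ts": doc["timestamp"], "value": float(doc["value"])}

def ingest(store) -> None:
    """Drain the main queue, trapping failures instead of guessing inline."""
    while main_queue:
        record = main_queue.popleft()
        try:
            store(parse(record))
        except Exception:
            quarantine.append(record)

def propose_fix(suggest_parser_patch, open_pull_request) -> None:
    """Background job: hand quarantined samples to an LLM-backed helper and open a
    pull request for human review. Both helpers are stand-ins, not real APIs."""
    if quarantine:
        patch = suggest_parser_patch(list(quarantine)[:20])
        open_pull_request(patch)

def flush_quarantine() -> None:
    """Once the patched parser has shipped, requeue the quarantined records."""
    main_queue.extend(quarantine)
    quarantine.clear()
```

The deterministic parser stays on the hot path, the LLM works off to the side on the exceptions, and a human still merges the fix.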
In 2026, within the realm of AI-fueled systems development, architecture will be king.
Autarky
If you live on planet Earth, you have likely had your life disrupted on several occasions by an AWS S3 outage that simultaneously affected hundreds of millions of other people, though you may not always have known the root cause. When CrowdStrike pushed a bad plug-in to myriad computers without a phased deployment, bricking countless mission critical systems without warning, you likely at least experienced ripples from that shockwave, and quite possibly much worse. This ought scare people more than it does, and we should strive to build a world more resilient to such failure modes, but we must also take care to distinguish between different categories of third party risk.
Let’s call this one Operational Reliability Risk — your counter-party broadly aims to do right by you and even bears some manner of contractual responsibility but shit happens.
If you were an SMB selling to consumers circa ~1995, you probably had some phone lines and PO boxes to collect orders, you might have begun dabbling with a website as another such conduit (if just to route people to the former), and most of your advertising flowed through an assortment of print, radio, and television channels with whom you explicitly contracted guaranteed placement. By ~2005, though, you probably had a web site that could complete purchases, and you were likely directing traffic there with organic search results and/or advertisement impression auctions with Google. Many such businesses saw their volume explode during this marketing renaissance in the run-up to Google’s IPO. And then, some random Tuesday, your order volume went to zero. Was your website down? No, it was up, but traffic had cratered because Google changed their algorithm. Just like that, life as you knew it was over, and by now going back to the old model probably wouldn’t work either.
Let’s name this category Relationship Fragility Risk — your counter-party may remain generally aligned with your class of entity but have no contractual obligations to any particular member, creating a game of rug-pull roulette.
Many traditional media companies, meanwhile, experienced a delightful boost to traffic and revenue around that time. Google lacked any content of its own and so formed a symbiotic relationship with the traditional media outlets that had gone online. They indexed the web and drove traffic to third party sites, those sites embedded inventory from Google AdSense, the two parties shared in the ad revenues, and the media companies grew their following.
In doing so, however, many media companies pivoted from explicit contracts with paperboys like me to an implied relationship with Google, which directed traffic to their sites, in time dropping subscriber paywalls in favor of paid ads, an approach that felt individually rational in the arms race to capture eyeball-hours while being collectively suicidal as monopoly power consolidated in the broker.
As Google coaxed the media ecosystem open only to emplace itself as the gatekeeper, it created the means to capture more of the total value for itself, doing so by increasingly repurposing third party content instead of directing traffic to the source, thereby reserving more of the ad revenue pie for itself by retaining eyeballs on its own property, a trend that continues to this day with Gemini’s “AI Overview”.
Let’s name this one Incentives Drift Risk — what starts as a symbiotic relationship between a rising power and a class of entities morphs gradually into an increasingly extractive relationship as monopoly power consolidates in the former while members of the latter, now beholden to the former, find themselves at the start of an arms race with their peers.
Zooming out to longer sweeps of time, we observe a media ecosystem not merely beholden to a small number of eyeball-hoarding gatekeepers, but rather one fundamentally altered by them, done so in a recursive manner that inexorably ratchets up the pressure on the entities that produce content while sculpting the human brains that consume it. You generally can’t feel the changes from day to day but when you look back to 2005 you can’t help but feel like you’re living on a different planet.
Facebook started by providing a fairly predictable feed of updates from your direct connections, evolved into curating those friend updates with increasingly sophisticated machine-learning algorithms optimizing for engagement, and over time tilted toward an infinite scroll filled with third party entertainers, influencers, and advertisers. Practically speaking, this means your friends went from directly engaging you with phone calls, text messages, emails, and meet-up proposals that risked explicit rejection, to outsourcing those interactions to an algorithmic distribution platform that pitted them against each other in an unprecedented popularity contest, and ultimately to ending up in a hopeless fight with an endless firehose of video clips from randos of car crashes, cuddly cats, pretty people, political polemics, and conspiracy kookery.
The algorithm learned that off-site links reduced total engagement and so deprioritized them. Traditional media responded by creating derivative content, hoping to receive a slice of advertisement revenue, and resigning themselves to only directing a tiny slice of traffic off-site. The algorithm learned that text longer than a phone screen reduced the opportunity to interleave ads and began punishing long-form posts. Traditional media responded by squeezing the text and hooking attention more superficially, spending an increasing fraction of their efforts on engineering punchy blurbs and scroll stopping images. The algorithm learned that the most engagement inducing emotions were fear and greed, and so began deprioritizing posts that didn’t stoke anger and jealousy. Traditional media responded by crafting hyper-partisan short-form political video content to compete with the slop.
Attention spans eroded, decision windows collapsed, content became correspondingly sensational, brief, and shallow, and our feeds became hyper-individualized, all while our friends faded out of view — flick, flick, flick, pause, flick, flick, flick.
We will call this one Ecosystem Drift Risk — over time the powerful gatekeeper doesn’t merely abuse its growing monopoly power to extract increasingly onerous rents from one or both participants in the mediated transactions but rather begins to force fundamental changes on both of the brokered parties that completely change the experience in ways that optimize profit while degrading quality.
Data is the new oil, AI the means to exploit it, consolidation the way to hoard it, and (for now) attention the ultimate product. This, too, however, can change as we continue to experience Ecosystem Drift both recursive and successive. We would do well to carefully examine the lessons of the past thirty years when deciding what to outsource to LLMs and how that will govern not just our present experience but also shape our future world.
I have stayed close to the programming-assistant LLM tool-chain over the last three years, enjoyed a great deal of leverage by leaning into its strengths, experienced its behavior as extremely unreliable on multiple fronts, and felt with some discomfort how it is rewiring our brains. The only universal advice I can reasonably offer people consists of encouraging them to stay engaged while urging caution around developing dependency — you’d be crazy not to use it, foolish to lose track of it, reckless to rely on it, and naive not to distrust its purveyors.
The wise practitioner will consider all of the aforementioned risk categories when making decisions in this realm around what to retain, what to outsource, and where to hedge on all things LLM. Let us consider each in turn while contextualizing to the different facets of the software development life cycle.
On the matter of Operational Reliability Risk, we need to consider “operations” on multiple fronts — the development of systems, their debugging, and their production operation, as well as the decision to outsource. Baking the invocation of SaaS LLMs into your production flows involves the greatest immediate risk and so must be done with great care, ideally limited to time-insensitive and/or best-effort functionality, and backstopped accordingly with explicitly contracted SLAs. Using LLMs in their development, meanwhile, may be the lowest risk area in the short-term, though only up until the point it spills into debugging, and in the long-term may experience some contagion effects from Ecosystem Drift Risk. Keep ever in mind how scary it would be to debug a production system incident coincident with LLM non-availability, and retain human expertise as necessary, both general ability and local knowledge, to keep degradation graceful instead of catastrophic. Consider whether outsourcing certain functionality to vendors with a concentration of human experts may be safer than operating a bespoke internal service that relies heavily on a small number of humans who are possibly over-leveraging AI.
In the realm of Relationship Fragility Risk, we must remember that querying an LLM for any purpose represents just about the most squishy API contract one could possibly imagine, essentially nothing more than “bits go in and bits come out”, an incredibly scary thing when accepted in concert with a black-box back-end subject to unilateral and unannounced changes by its owners and strained by the uncoordinated competing workloads of unknown others. A service provider may remain generally aligned with entities of your kind, and yet have no real understanding of or particular concern about your own well-being, only the viability of your cohort in aggregate. This may leave positively giddy today-you, particularly if you have inlined LLM invocation with mission-critical production workflows, subject to potentially company-ending tail risk wherein the ground shifts beneath your feet, as disruption could render your business unviable overnight while others shrug and be like “works in my microverse”. This kind of risk also threatens system development and debugging albeit in a more recoverable way unless your engineering team’s human capabilities have severely atrophied.
With Incentives Drift Risk you must remain wary both of imperceptibly slow asphyxiation and violently dramatic upheaval. Consider this past summer’s soap opera centered on the AI-powered code editor Cursor when the company changed the tool’s operational behavior and charge model in a way where countless users went from delighted customers to furious ones overnight in a rug-pull so severe that breach-of-contract claims began to fly. This settled down eventually and yet the big bang, though tamped down, portends a slow choke. To the extent that we developers presently enjoy the leverage of LLMs, to what degree does venture capital subsidize it in a manner that may prove unsustainable, and consequently what risks exist around our being treated as disposable booster rockets in service to some larger and shifting objective that we don’t fully understand? We may feel this risk most viscerally when building systems but their long-term operations in fact house the most existential risk — the more the FinOps of AI-era SaaS companies resemble those of telcos rather than traditional SaaS owing to continual token burn, the more those companies put themselves at the mercy of fickle upstream vendors dealing with similar financial pressures.
Finally, with LLM-driven Ecosystem Drift Risk, we find ourselves pondering the most nebulous and existential problems, areas where indirect effects, negative externalities, and coordination problems dominate. Anyone of programming expertise prior to the LLM boom who has begun incorporating LLMs into their daily workflows, and who has even a modicum of self-awareness, has felt the risk of skills atrophy, a phenomenon eerily similar to what happens to aircraft pilots who rely excessively on auto-pilot systems, though more exaggerated. You lose what you don’t use, but whereas with pilots this manifests mostly as catastrophic non-readiness in emergencies, programmers will not only experience punctuated moments of disastrous non-readiness but also failures that more resemble slow decay if system architecture and hygiene increasingly fail to receive their due. And these individual failures of discipline and readiness in time compound into aggregate deficiencies in the biomass of programmers available to the world’s engineering projects, both by degrading today’s experts and by choking off the supply of new programmers if opportunities for people to get onto the first rung of the ladder disappear, and you can bet that the big players in the ecosystem will adapt accordingly.
And so we must continually re-assess the balance of leverage, flexibility, reliability, and autonomy in all manner of outsourcing choices, a challenge that was always with us but that has grown enormously difficult in the earliest days of what may prove humanity’s last invention. Nearly thirty years on I still love Emacs, derive great power from it, and am grateful for the work of the plug-in writers who have continued to make it a viable tool (e.g. gptel, copilot, and eglot), but with every passing month my usage therein nonetheless looks more like architecture design and code surgery than anything like the programming of yesteryear, with more and more of the actual smashing out of characters being done by LLMs, whether ChatGPT, Claude Code, or Cursor, depending on the moment. I religiously refuse to push code that I don’t understand, worry about the long-term risk of thinking I understand when really I don’t, and am amazed by LLM-powered VCS plug-ins that occasionally catch complex and obscure issues that I missed despite my best efforts to combine human and robot on the creation side.
Weird times.
Alignment
Moments after ChatGPT landed we found ourselves navigating apocalyptic ideation around runaway super-intelligences. We absolutely should worry about that class of scenario, but we fixate on it to the exclusion of others at our peril, for the challenge of alignment involves innumerable contests at each of many scales that span decades. Wisdom looks like drawing lessons from the previous chapters of the information economy explosion while remaining open to the possibility that new classes of problems may emerge that could prove as surprising as they are intractable.
I recall circa 2005 bootstrapping a personal project to scrape, summarize, and alert on traffic data from sensors deployed throughout Maryland by CHART, the Coordinated Highways Action Response Team, so I could peruse a summary web page on an unclass terminal before departing my SCIF to brave DC-area traffic and enjoy SMS updates on my simple if stalwart Sony Ericsson phone once in my car. The impetus for, and implementation of, such a solution was very much a product of the times — Linode launched in 2003 which afforded me a Linux VPS on which to host a Perl script invoked by cron, Apache for a website, Postfix for email, and BIND for DNS, in a world where carrier-provided email-to-SMS gateways were state-of-art and the iPhone and Google Maps’s traffic layer wouldn’t arrive until 2007. The life of an Internet plumber in that era was good grubby fun.
I revisited this project in the early 2010s as an excuse to learn some AWS and maybe create something more broadly useful. Quickly, however, I found myself on an uneven playing field when using CHART as my data provider and lacking a good alternative. With the arrival of the iPhone, Android, and the traffic layer of Google Maps, every driver with a smartphone had become a volunteer participant in a massive and self-reinforcing mobile sensor network owned by Google, the intelligence-producing and competition-crushing power of which we had not seen since railway era robber barons. When Snowden busted onto the scene in 2013, causing a huge public backlash against government surveillance, I just shook my head at the sad realization that, though an important conversation, it would ultimately feel like a “fighting the last war” kind of moment as the world underwent tectonic shifts.
How did we transition so quickly from an open web, birthed largely with public funds, in which anyone could participate on a roughly equal footing, to an Internet dominated by a tiny number of data-hoarding gate-keeping proprietary platforms? To make sense of it, let’s travel back farther still to explore the inter-locking and self-reinforcing trends around technology, finance, and intelligence.
Around 1995 I thrilled at the opportunity to upgrade my childhood Windows PC from 8MB of RAM to 16MB as it unlocked the multi-player joy that came of being able to run MechWarrior 2 and MPlayer at the same time while connecting out over my 28.8 kbps modem. I think we also upgraded to a 512MB spinning platter hard drive around the same time. Fast forward to the present day and I’m typing this on a MacBook outfitted with RAM measured in tens of GIGAbytes and a disk sized in TERAbytes, I’ve got a phone in my pocket that isn’t notably weaker for most applications, both of these devices cost roughly the same as my thirty-years-ago PC in nominal dollars, and they’re both riding on a gigabit network connection. Yes, this is pretty sweet as a consumer, but imagine now the phase change such plunging unit economics would trigger in the production, capture, storage, analysis, and exploitation of telemetry by gatekeeper companies riding atop a venture capital boom.
Actually we should really go back farther still to understand the origin story of our modern information economy and how it resembles a massive flywheel that began spinning up nearly a century ago. We don’t need to rehash every detail of WWII but we should briefly remark on the hinge point represented by Dr. Alan Turing’s mass mechanization of information exploitation in the forties with the proto-computers used to scale the cracking of Axis-power ciphers. From there, owing to the shared DNA of intelligence and finance problems, the computerization of securities exchanges in the fifties and their networking in the subsequent decades seems like a foregone conclusion, eventualities that at once placed transactions on observable rails and allowed sophisticated systems builders to accumulate massive wealth as they leveraged centralized communications networks to exploit information asymmetries at unprecedented scope, scale, and speed.
The logical progression that followed included ARPANET in the sixties, TCP/IP in the seventies, the Internet in the early eighties, and then (as just one exemplar branch) D. E. Shaw in the late eighties, Jeff Bezos emerging from there to found Amazon in the mid-nineties coincident with the proliferation of low-friction digital payments, and by now an umbrella company that boasts its own digital store front, runs highly roboticized warehouses, fulfills orders with its own shipping fleet, issues credit cards, and operates the dominant public cloud (and, for good measure, owns grocery stores as well).
As we attempt to make sense of the evolving incentives structures governing the alignment issues of the last quarter century, they appear to divide into a few clusters of secular trends that both build internally while overlapping, interlocking, and reinforcing.
First, stacking atop the earlier digitization and networking of all manner of correspondence and transactions, the plummeting unit costs for hardware coupled with ubiquitous broadband Internet access incentivized the instrumentation and recording of everything, while network effects and capital consolidation fostered an ever smaller number of ultra-resourced, chokepoint-holding, data-hoarding, wizardry-wielding, surveillance capitalism behemoths.
On that thread — Google’s 1998 founding marked the rise of the first algorithmic Internet gatekeeper, Facebook’s founding in 2004 and Twitter’s in 2006 fomented an era of continual micro-blogging, Apple’s 2007 launch of the iPhone cemented our “always online” way of life, Apple’s 2008 launch of the App Store enabled the delivery of hyper-instrumented thick clients, Facebook’s introduction of the Like Button and Google’s purchase of reCAPTCHA in 2009 enlisted everyone as volunteers in a data tagging army, and the iPhone’s 2010 addition of the front-facing camera turbocharged a frenetically competitive era of low-friction, high-anxiety, content-rich journaling.
Second, a growing pressure to monetize, enabled by the rising sophistication of Machine Learning algorithms and coincident with nascent desperation in the general population, drove companies to optimize for engagement while honing micro-targeted advertisement, resulting in the prioritization and thus proliferation of short-form content hyper-tailored to an individual’s interest that often favored the inducement of anger and jealousy, the tertiary consequence of which being a population of waning financial viability using their collapsing attention spans to purchase influencer peddled products with predatory BNPL and chase get-rich-quick schemes while their political views grew increasingly partisan.
To wit — the 2008 CDO-fueled global financial crisis gave way to the ZIRP-era induced VC bonanza, the 2009 creation of Bitcoin, the 2013 founding of Robinhood, the 2017 proliferation of ICOs, the founding of Kalshi in 2018 and Polymarket in 2020, the 2020 COVID-era shutdowns and relief checks that fueled speculative booms in financial markets as desperate gamblers re-routed from sports betting to crypto coins, meme stocks, and prediction markets, the 2021 peak market valuation of Klarna at $45B, the 2023 graduation of a cohort of “Zoomers” who started college only to be blindsided by COVID and finished it only to be disrupted by LLMs, and finally a 2025 Trump Executive Order that facilitates access to cryptocurrency in 401(k) plans.
Third, government completely lost the initiative, owing to a variety of coincident trends — the speed of technical innovation accelerated away from their capacity to regulate, increasingly partisan political parties with narrow mandates fomented rapidly inverting regulatory regimes, and growing capital pools amplified the power of private companies to engage in regulatory capture through lobbying while developments like crypto-currency and Citizens United liberated and obscured the wielding of such power while encouraging corruption.
Consider then how — the 2010 flash crash in the stock market revealed that algorithms had taken over the financial world, the 2010 Citizens United v. FEC decision compounded with the creation of crypto-currency to form vast dark pools of highly influential capital, the 2011 stand-up of the CFPB plus the 2015 enactment of the net neutrality order plus the EU’s 2016 adoption of GDPR felt like locking the barn doors long after the horses were gone, the 2017 founding of Clearview AI and the 2018 brouhaha with Cambridge Analytica amply illustrated this reality, the 2022 meltdown of FTX and 2023 collapse of Silicon Valley Bank revealed that regulators still didn’t have their arms around the financial system, and the 2024/2025 founding of World Liberty Financial by Trump family and friends, its minting of the USD1 stable coin, and the subsequent commitment of Abu Dhabi-backed MGX to buy $2B worth of it (to say nothing of the $TRUMP coin and “Liberation Day” tariffs) made plain the obsolescence of The Emoluments Clause.
Fourth, as intrinsic government power waned while Big Tech grew ever more potent, the temptation for the former to lean on the latter became irresistible, resulting in not just substantial intelligence gathering enablement but also an assortment of information wars that took the shape of content moderation, thought sculpting, deplatforming, shadow banning, publication timing, and identity laundering, the further consequence of which being rising suspicion that the whole maddeningly illegible system was rigged, which in concert with mounting desperation and partisan sentiment has driven people to populist poles and seen the election of a president with unitary executive leanings and a general disdain for institutions.
Along those lines — the 2010 Arab Spring followed by the 2011 crackdowns revealed how authoritarian governments could unilaterally scrape social media platforms to hunt down dissidents, the 2013 Snowden dump painted a picture of private companies consenting to enabling intelligence gathering operations, the 2019 paid pilot between ICE and Clearview AI placed that reality in plain view, the war over content policing (often government induced) reached a fever pitch in 2020 during COVID, the delay of revelations about Hunter Biden’s laptop around the 2020 presidential election sparked furor over Big Tech’s active meddling with political outcomes, the deplatforming of Trump in January 2021 after the J6 events drove home the extent of Big Tech’s powers, Musk’s purchase of Twitter and subsequent dump of internal memos in 2022 fanned the flames that fueled a Trump resurgence, the 2024 quadrupling of Palantir’s stock told the story of an intelligence privatization bonanza, and a 2025 Trump 2.0 that included a Musk-led DOGE and a Russell Vought-led OMB accelerated a hollowing out of government.
Finally, with the release of ChatGPT about three years ago, we democratized the use of Generative AI and kicked off a global arms race, the result being a growing distrust of all information, a creeping anxiety about labor displacement, and an all-consuming existential struggle between governments and corporations that is dominating the conversation and economy alike.
And so now — nearly every Big Tech company is making an enormous GenAI play, the data center and energy production build-outs are dominating GDP growth and stock markets, every player in both the corporate and national space feels the pressure of an existential competition, public/private initiatives are becoming ever more entwined in a Manhattan Project-esque fashion, bizarre cyclic investment schemes between large companies are inflating valuations, the picks-and-shovels purveyor NVIDIA became the first company to surpass a $5T market cap, commodity Deep Fake generation capabilities have begun drowning the Internet in content often indistinguishable from the real thing, and everyone paying attention to the vibe and/or already experiencing pocketbook pressure feels a sense of impending doom.
This human-and-cyber hairball is the backdrop against which we must ponder so-called “AI Alignment” issues… and you were worried about being killed by Skynet?
Well, you should be worried about that, too, just not only that. Save some mind-share for the less fantastic yet no less dystopian world of AI-empowered oligarchs, wherein an internal memo leaked in 2018 revealed that as far back as 2012 Zuckerberg was remarking that what’s good for the world is not necessarily what’s good for Facebook.
I find myself analogizing the present AI-takeoff moment to the boundary of the two distinct chapters of nuclear weapons development that divide roughly into Richard Rhodes’ book “The Making of the Atomic Bomb” and Eric Schlosser’s subsequent “Command and Control”, the former of which focuses on the journey that culminated in the Trinity Test in the US southwest and the ending of WWII with the bombing of Japan’s Hiroshima and Nagasaki, and the latter of which focuses on the subsequent “productization” of the prototypes, a tale of scaling up the manufacturing of inputs, refining and diversifying delivery platforms, shaping operational doctrines, and ultimately proliferating the capabilities to a growing number of nations.
The first bomb going bang in the desert feels akin to that moment with LLMs when we realized that adopting a particular approach and then just scaling up the number of parameters to a particular threshold allowed magic to manifest, much like the “critical mass” required for atomic fission, vindicating the earlier assertion that “Attention Is All You Need”. We should also note here that, at the time of that first test explosion, the scientists on hand were still not entirely sure that they wouldn’t ignite the atmosphere and kill all life on Earth… but they did it anyway.
And, yes, we collectively saw fit to keep developing better “models” of nuclear weapons, now sporting ones orders of magnitude more potent than those that ended a war, but the story of the last eighty years in that realm centers not so much on how big a boom any given bomb could make but rather on the unbelievably complex web of events set in motion by such an existentially threatening weapon. The US and USSR spent forty-five years in a Cold War during which a mix of brinkmanship, accidents, and misunderstandings had us continually courting catastrophe, all while the military-industrial complex that Eisenhower so feared grew inexorably. The subsequent collapse of the USSR triggered a proliferation crisis, the nascent retreat of the US from its role as global hegemon now threatens yet another such crisis, and even a relatively minor mishap or misadventure involving a single localized detonation could trigger a mass-poisoning or even climate-changing event from the fallout.
But at least the deterrence power of nuclear weapons put an end to great power conflict, right? Riiiiiiight? Evidently not — and the current Russia/Ukraine conflict is only the latest reminder that we have simply pivoted from well-bounded, formally declared, conventional wars to wholly undeclared, largely unconventional, unending proxy wars. In fact nuclear weapons, in concert with a host of other technological innovations, made conflict more expensive, expansive, continual, opaque, and unaccountable — a veritable thicket of fuzzy incentives with poorly understood alignment.
But we survived that, right? We always figure it out, don’t we? True, sort of, so far, but sometimes “figuring it out” has involved killing tens of millions of people at a go and destroying most of the world’s infrastructure while we’re at it. We would further do well to recall the Anthropic Principle, also known as the “observation selection effect”, and thereby accept the difficulty of reasoning about the likelihood of a succession of outcomes when observing them requires your survival, while also keeping in mind that each of our successive dice rolls takes place in an increasingly connected world sporting ever more powerful technology, a combination that makes the planet feel ever smaller.
We often talk about “AI Safety” and “Alignment” as if we can run a test suite in a sealed lab, validate that a model behaves, and then release it into the wild, but that is preposterous fiction…
Firstly, it assumes that paltry humans possess the intellect to understand a superintelligence well enough that it could not successfully conceal its true nature or somehow incept humans across a divide, but that isn’t even the most immediate gnarly problem.
Secondly, it assumes that a model once released in the wild couldn’t find a way to engage in recursive improvement and cross some new “phase change” type threshold, but that also isn’t the biggest immediate risk.
In fact, as scary as those scenarios are, the most pressing risk centers simply on the proliferation of current models exactly as they are, augmented by a growing collection of “delivery systems”, deployed into a world already as incomprehensibly complex as previously enumerated, and implemented by a species that couldn’t even figure out how to manage “climate change” in the atmospheric domain, to say nothing of the sociopolitical and cyber ones.
I fear that our fixation on the atmospheric subclass of “climate change”, though real and worthy of our attention, has blinded us to much broader and more pernicious trends. I worry that metaphorical “lead” has leached into the metaphorical “water” and that we are struggling to perceive it not just because the changes have arrived so gradually but also because their effects have degraded our capacity to explore autonomously, consume skeptically, argue shrewdly, design carefully, and operate robustly, all while fragmenting us into increasingly hostile tribes more readily exploitable by technology-enabled demagogues who stoke fear, promote division, and encourage transactionality. Perhaps before we try to argue with our neighbors we should instead swing our focus upstream. To riff on a famous synthesis from an earlier epoch — it’s the algorithm, stupid.
I am grateful for a grandmother whose college graduation present to me was to fund my first IRA contribution, the foundation of a lifetime’s slow-and-steady track-the-market approach to finances. I feel for the masses of people so desperate that they are putting all their chips on a single number of the roulette wheel by swinging for the fences with crypto coins, meme stocks, and prediction markets. I can see how that may be just one level removed from a populist “burn it all down” bet that rejects institutions and places faith in an autocrat because anything seems better than their current situation.
I remember spending eleven years as a civil servant subject to the 20/50 rule (the $20-per-gift and $50-per-year-per-source limits on allowable gifts) while happily accepting compensation well below my market rate because the mission offered a sense of purpose and belonging. I look now with horror upon a sitting president nakedly monetizing political office while driving us internally toward the politics of fear, scarcity, and xenophobia as well as externally pivoting us toward policies of bullying, isolationism, and transactionality.
I remember going to physical libraries to find books and papers for research projects, delivering pizzas with a paper map in a car without power steering, programming in a world where every line of code sprung from my own fingers, and writing in a world where my only editors were human. The current world better serves me in a variety of ways but with an assortment of attendant Faustian bargains. I am struggling to enjoy the leverage without unduly surrendering volition, creativity, and resilience. And I wonder how fundamentally misaligned the purveyors of the underpinning technology may be with me specifically and society generally.
Not so long ago we conscripted forces from a human population under democratic rule that engendered a certain back pressure on the worst excesses. Now we are moving toward extreme centralization and automation, coupled with pervasive surveillance, under creeping authoritarianism, that consolidates power and privatizes operations while disconnecting the pain signal. This feels dangerously unaccountable.
Nobody venerates hypocrisy, yet hypocrisy is the tribute that vice pays to virtue, and worrisomely the Overton Window has shifted so far that we appear to have discarded even this timeless vice.
This unraveling of the social fabric at multiple levels and across multiple facets surrenders all manner of intangible goods, soft power, and systems resilience.
And the rapidity with which change is arriving keeps us from metabolizing it.
Everyone has a test environment. Sometimes it isn’t prod. Sometimes it can only be prod.
Hello, Venezuela.