Nom Nom The Seed Corn, or… The Gradual Ceding Of Our Legacy To Alien Intelligences

Circa 2015, when I first began fooling around in airplanes, I experienced a troubling revelation — I had shit-for-brains when it came to reading maps.

The pervasive availability of real-time turn-by-turn directions on pocket-sized supercomputers had, over many years, inexorably atrophied the portion of my brain that twenty years earlier had served me well as a pizza delivery boy finding houses in the dark with a paper map. I likewise realized that a lifetime of driving automatic transmission cars had deprived me of the kind of practice that builds the feather touch required to finesse the rudder pedals with my feet.

And so I embarked on some cross-training that would benefit both my life generally and this flavor of aviation specifically — I switched to driving a manual transmission car and resolved never again to embark on a road trip of any variety without a clear mental map of how I would get where I was going. I still use and appreciate Google Maps, for sure, but I endeavor to employ it more as a congestion-based routing optimizer than as a mission-critical navigation director.

But instead of fast-forwarding to a world where autonomous cars yield an environment in which most humans lack the basic skills to operate even terrestrial vehicles, never mind airborne ones, let’s go backward…

The Creeping Prevalence Of AI

I could almost laugh, if our gerontocracy weren’t driving a looming crisis, at the degree to which many of our politicians, even as late as the latter part of the 2010s, held the belief that powerful AIs still existed only in the realms of science fiction and pitch decks. In some cases this obliviousness may have stemmed from a disconnectedness — how many of them eschewed even email as long as they could manage, never mind social media? In other cases, ones where they had direct interactions, the progress proved more subtle and gradual — over a span of fifty years cars evolved from containing perhaps a few tens of thousands of lines of code to likely hundreds of millions, all the while the Machine Learning algorithms powering autonomous vehicles rendered that metric decreasingly meaningful.

Sometimes events dropped like bombshells, as when Deep Blue defeated Kasparov at chess in 1997 or Watson bested Jennings at Jeopardy in 2011, and probably nobody needs reminding of how ChatGPT burst onto the scene earlier this year. More generally, though, a natural reaction to the emergence of various technologies was likely a subtler “ooh — that’s neat” or “huh — I wonder how they do that?”, not “it’s the end of the world as we know it”. Even those subtler moments, though, often marked sea-change-level demarcations in the progression, most notably when the AI snake began eating its own tail, exemplified by a few moments in particular — the advent of Google Suggest in 2004, the appearance of Facebook’s “Like” button in 2009, and Gmail’s integration of Smart Compose in 2019.

Awakening the general public to the threats that had gradually woven themselves through the fabric of our society would require the weaponization of social media by domestic and foreign powers alike, in ways too obvious to ignore as they spilled over into real-world consequences. My only hope for any kind of sanity in the upcoming 2024 presidential election centers on LLMs having made Deep Fakery so accessible that people will stop believing anything they read on the Internet.

No, Really, This Time Actually Is Different

In earlier epochs one could readily wave away advances as not representing existential crises for most professionals. In the realm of gaming, the majority of us could rest easy on the basis of not having our livelihoods and identities wrapped up in Chess, Jeopardy, Go, Poker, or StarCraft (while still being able to glean useful brain calisthenics from them as hobbies even as AI eclipsed our abilities in them). In the realm of non-gaming professions, the advances generally looked like drudgery-eliminating technologies that freed up more time for the main body of work, such as speech-to-text software liberating medical professionals from laboriously transcribing their notes (though I’m told it took a while for mainstream systems to stop mis-transcribing “the patient was prepped and draped in the usual fashion” as “the patient was stripped and raped in the usual fashion”).

The emergence of Generative Artificial Intelligences, however, stands poised to upend this order, and the follow-on acts of Artificial General Intelligences further still. “Come now”, we professionals might defensively say, “we’re just going to see things like call centers get automated”, imagining our own disciplines somehow impervious. Well, sure, at first, but with some of the AI craze leveling off almost a year after the release of ChatGPT we would do well to remember a recurring property of our perceptions — we reliably over-estimate progress in the short term owing to an under-estimation of implementation complexity while simultaneously under-estimating progress in the long term owing to a failure to comprehend the power of compounding. Today’s call center employees may be next year’s lawyers, the following year’s radiologists, and the subsequent year’s rank-and-file programmers. Meanwhile tomorrow’s AIs will increasingly run atop compute substrates and training sets generated by their cyber-ancestors.

I have been playing with ChatGPT extensively for programming tasks this year in an evolving attempt to ascertain where its strengths and weaknesses lie, both to estimate its impact on our ecosystem generally and to tune my ability to employ it usefully specifically, not just today but also with an eye toward imagining how its technical progression and ecosystem interactions will shape our future. The pithiest way I can summarize its present capabilities in my domain may be that of a high-output yet very high-maintenance junior programmer mashed up with a forgetful and schizophrenic idiot savant from each and every specialty area. Its ability to take highly nuanced and idiomatic conversational language and emit useful chunks of code truly amazes, but it nonetheless struggles increasingly the more integration-intensive, architecturally focused, or bleeding-edge my areas of exploration become, and the longer a session continues. For now…

The more I have used the system, though, the more juice I have found I can squeeze out of it already. I am not sure to what degree that apparent improvement over time stems from my own improved prompt engineering (topic selection, chunk sizing, question phrasing) or from its improved handling capabilities (richer model, better heuristics, more resources), but I am increasingly finding ways to make this tool work for someone of my archetype (principal/generalist/inventor/integrator), and the implications of this are by turns empowering, fascinating, and terrifying. For my stage and specialization I often find that the reality summarizes as “I know what I want to build, I know the shape and componentry needed, and I’ve even built something vaguely like this before, but I could use just a little bit of help with certain fiddly details to make things happen a little faster”, and that last bit is where ChatGPT is saying “I can help, at least a lot of the time, at least enough to be worth the trouble, if only you’re a little bit clever in the way you learned to be with Google twenty years ago”.
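To make the “chunk sizing” and “question phrasing” point concrete, here is a minimal sketch of the narrowly scoped ask I have in mind. It assumes the openai Python package’s v1-style chat-completions client; the helper name, model string, and prompt wording are illustrative rather than anything I actually standardized on. The essence is handing the model one fiddly detail at a time, with just enough surrounding context to fit the answer into.

```python
# A sketch of "chunk sizing": ask for one fiddly detail at a time,
# with a small slice of context pasted in, rather than the whole system.
# Assumes the openai package's v1-style client; adjust to your setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_for_snippet(goal: str, context_snippet: str, constraints: str) -> str:
    """Request a small, self-contained piece of code for one narrow sub-problem."""
    prompt = (
        f"Goal: {goal}\n"
        f"Existing context (do not rewrite it, just fit into it):\n{context_snippet}\n"
        f"Constraints: {constraints}\n"
        "Reply with a single self-contained function and no commentary."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[
            {"role": "system", "content": "You are a terse senior programmer."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

# A narrowly phrased question, rather than "build my app for me":
# print(ask_for_snippet(
#     "Parse an ISO 8601 duration like 'P3DT4H' into a timedelta",
#     "def schedule(job, delay: timedelta): ...",
#     "Python 3.11 standard library only; raise ValueError on bad input",
# ))
```

The design choice worth noting is that the prompt carries the goal, a sliver of context, and explicit constraints, which is roughly the discipline I mean by chunk sizing.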

That really speaks to the theory that we will see a near-future bifurcation wherein some people attain superpowers while others risk being left behind. Meanwhile we are fostering an unpredictable and potentially soul-destroying long-term situation where Generative AI’s output becomes the next iteration of training data for the same kind of systems, a feedback loop in which we lose not just control of our species’ destiny but also our sense of being and purpose. We need to think hard about that. We probably can’t stop AI’s ascendancy, but we might find clever ways to ensure that it elevates most people rather than leading us accidentally either to commit species-level genocide against ourselves or to hand unprecedentedly concentrated power to people whose goals may not align with humanity’s generally.

On The Cultivation Of The Future’s Humans

I feel rather blessed by the timing of my landing in the tech ecosystem. I began programming at a moment when much of what constituted a product or system ran on a single desktop, server, or some combination thereof, with any intervening networking (usually) simple enough to prove transparent. As we progressed toward and through the knee of the exponential curve of technological development I enjoyed sufficient neuroplasticity to revel in the luxury of absorbing the concepts as they came into existence. How lucky I have been to start with a strong classical grounding in computer science and systems programming (and a love of Emacs), followed by the as-it-emerged gradual and incremental metabolism of such diverse and powerful technologies as…

  1. The Internet
  2. Version Control (that didn’t suck)
  3. Relational Databases (that didn’t blow)
  4. Object/Relational Mappers (that I didn’t write myself)
  5. Rich Web Applications (that didn’t involve my simulating a Canvas element)
  6. Database Migrations As Code
  7. Pervasive Cryptography
  8. Containerized Application Deployment
  9. Microservice Architectures
  10. Observability Tooling
  11. Software Defined Data Centers
  12. Infrastructure As Code
  13. Elastically Scalable Systems
  14. Public Compute Clouds
  15. Machine Learning
  16. Skynet Large Language Models

I had already been thinking, before the emergence of ChatGPT, something like — “well, damn yo, don’t today’s college CS grads have a bewilderingly broad and deep stack to learn? No wonder they are tempted to be specialists and gravitate toward something as focused as UX or ML…”

Now it’s even worse for today’s junior programmers. ChatGPT is nowhere near being able to construct, debug, and evolve complex systems, but it is quite capable of answering requests like “please generate a snippet of code that solves such-and-such a narrow sub-problem for me”, the result of which I can then massage into a larger context. That is good for Today Me; it’s quite bad for Today’s Grads; and zooming out I perceive the severing of a clear path from Today’s Grads to Tomorrow’s Seniors that makes me fret for Tomorrow Me specifically and even Today’s Humanity generally. Waxing ever so slightly hyperbolic — the only people I envy less than the people trying to pick a college major today are the ones who picked a college major four years ago. The road to being an expert is long and we have suddenly knocked out a bridge for many people to get there.
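To illustrate the sort of narrow sub-problem I mean (this example is mine, not lifted from any particular session): a retry-with-exponential-backoff wrapper is exactly the kind of fiddly, well-trodden detail the tool produces reliably, and which a senior can quickly review and massage into whatever larger system actually needs it.

```python
# A hypothetical example of the "narrow sub-problem" snippet described above:
# retry a flaky call with exponential backoff plus jitter. Small, self-contained,
# and easy to vet before folding into a larger codebase.
import random
import time

def retry_with_backoff(fn, *, attempts=5, base_delay=0.5, max_delay=30.0):
    """Call fn(); on exception, sleep roughly base_delay * 2**n (with jitter) and retry."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the last failure
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay / 2))

# Usage sketch (fetch_page is a stand-in for whatever flaky call you have):
# data = retry_with_backoff(lambda: fetch_page("https://example.com"))
```

The snippet itself is unremarkable, which is the point: the remaining work of deciding where it belongs, what failures it should not swallow, and how it is tested is exactly the judgment a junior used to build by writing such things from scratch.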

How strange a contrast to a world that existed not so long ago wherein most people could easily spend a whole life working in, if not a single company, then certainly a single career. How bizarre that just a few thousand years ago we could expect thousands of years between successive technological ages, yet now major revolutions arrive every year, or arguably even more frequently, as we enter the age of Generative AI.

As a society we urgently need to think hard about labor force participation and the domain of humans. I shudder to think of a future where humans have lost the ability to write, never mind program, for this seems a path where we lose the ability to think. My formative professional years took place at a federal agency that had experienced a sustained hiring freeze between the end of The Cold War and the events of 9/11, the consequence of which (I can see with hindsight) was an irrecoverable loss of continuity that harmed both the mission and the participants. I foresee a looming analogous crisis for our workforce generally if we fail to recognize the risks and act accordingly. The time for proactive remedies is now, but the touch must be light, nuanced, open-minded, and far-sighted.
