This morning I read the Atlantic piece Finding The Next Edison*, in which Derek Thompson touches on the notion of “optimal distance”. The idea sent me back to Hamming’s 1986 talk You And Your Research, whose transcript I find myself re-reading every few years for inspiration.
Being optimally distant from a problem means possessing expertise close enough that you can understand the basics and tinker with solutions, yet far enough away to avoid the over-familiarity that breeds complacency and groupthink. If, for example, you are laboring to help NASA predict solar-particle events, you will likely flounder as an English Literature professor, and you may bog down in the status quo as a NASA lifer, but as a retired telecommunications engineer you might be the perfect outsider.
This harks back to Hamming’s lament that…
“When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off.”
Regarding which he counsels us to…
“Somewhere around every seven years make a significant, if not complete, shift in your field. Thus, I shifted from numerical analysis, to hardware, to software, and so on, periodically, because you tend to use up your ideas. When you go to a new field, you have to start over as a baby. You are no longer the big mukity muk and you can start back there and you can start planting those acorns which will become the giant oaks.”
*: We’ll ignore for the moment that Edison was more businessman than inventor and that the piece would have been better titled Finding The Next Tesla.
To The Atlantic Editors,
While I agree with the overall thesis of Carr’s November 2013 piece, The Great Forgetting, I am disappointed by at least one item of proffered evidence, which suffers from multiple substantial flaws. In particular, he cites “one recent study, conducted by Australian researchers, [that] examined the effects of [software] systems used by three international accounting firms”. From the limited presentation, the study seems quite likely to conflate correlation with causation, rendering it largely useless. Furthermore, attributing the study merely to unnamed “Australian researchers”, with no mention of a study name, institution, publication, or date, makes it impractical to find the source.
We are told that “two of the firms employed highly advanced software” while a third firm “used simpler software”, the former offering far more decision-support functionality than the latter. Subjected to “a test measuring their expertise”, we’re told, individuals using the simpler software “displayed a significantly stronger understanding of different forms of risk”. And… What are we to believe about this?
The study’s presentation seems to imply that the overly helpful software atrophied the brains of the workers in the two firms using it. Maybe that is true, but we don’t actually know that such a causal relationship exists. As an alternative explanation, perhaps the two firms that use the (purportedly) more sophisticated software generally hire lower-caliber accountants and have decided that more intrusive software is the only way to get acceptable results from them. Furthermore… Was the proficiency of individuals measured both before and after exposure to the firms’ software? How long did the individuals use the software? Were the sample sizes large enough to avoid statistical noise? Were there any meaningful controls in place for this study?
Carr has apparently interpreted this study in a way that makes it convenient to weave into the larger narrative of the piece, but as presented it fails to support his thesis. Such careless cherry-picking undermines a very real and otherwise well-articulated issue.