
Information Technology

Science is not truth; it is, instead, a method for diminishing ignorance. --J.M. Adovasio, Olga Soffer, Jake Page

There are historically two hard problems in Computer Science:

  • Cache invalidation
  • Naming things
  • Off-by-one errors
--Phil Karlton + Martin Fowler


I am interested in current challenges in biomedical IT infrastructure. One of these is that the field's capacity to produce data began to diverge from Moore's law in ~2007, with the implication that managing that data -- capturing it, processing it, curating it -- became expensive. (And that's before you get to the interesting part, which is analyzing it.) There is nothing particularly unique about this experience: the high-energy physics community, for example, entered this divergence many decades ago. However, the high-energy physicists have since learned that if you spend X on a particular accelerator, you may well spend .5X on data processing, whereas the life sciences research community has only started to understand the scale of the problem. Thus, life sciences research tends to be hamstrung by the mismatch between its data production capabilities and its data handling capabilities: network, compute, storage, and system administration infrastructure all tend to struggle.

Hard Skills

Five Areas

To be a well-rounded IT professional, I figure that one needs to grasp the basics of five worlds: programming, operating systems, networking, storage, and databases. From programming, one learns concepts like error checking and boundary testing. From operating systems, one learns about resource contention and deadlocks. From networking, one learns how to build reliable systems on top of an unreliable world. From storage, one learns from the intersection of the analog (spinning platters) with the digital (bytes), plus the spectacular downtime that arises when a disk goes away. And from databases, one learns the need for consistency, along with applications of set theory. To illustrate the benefit of fluency in all five of these languages, consider how:

  • The networking professional, who sees dropped packets as normal, and the database professional, who sees a database as either entirely consistent or entirely broken ... can look at one another in dismay.
  • Everyone can utilize the concepts of boundary checking and resource contention when analyzing client/server interactions.
  • Everyone can find utility in understanding that storage systems deliver inconsistent performance, particularly under load, when caching schemes become overwhelmed and reads/writes must reach all the way to (slow) spinning rust.
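The operating-systems lesson above -- resource contention and deadlocks -- can be made concrete with a small sketch. This is illustrative Python, not drawn from any of the texts mentioned here: two threads request the same pair of locks in opposite order, the classic deadlock setup, and a lock-ordering discipline saves them.

```python
import threading

# Two shared resources and the locks guarding them.
lock_a = threading.Lock()
lock_b = threading.Lock()

def transfer(src, dst, locks):
    # Acquire both locks in a single global order (here: by object id),
    # regardless of the order the caller listed them.  Two threads
    # contending for the same pair can then never deadlock, because
    # neither can hold the "second" lock while waiting for the "first".
    first, second = sorted(locks, key=id)
    with first:
        with second:
            dst.append(src.pop())

queue_a, queue_b = [1, 2, 3], []

# Each thread names the locks in the opposite order -- without the
# ordering discipline inside transfer(), this could hang forever.
t1 = threading.Thread(target=transfer, args=(queue_a, queue_b, (lock_a, lock_b)))
t2 = threading.Thread(target=transfer, args=(queue_a, queue_b, (lock_b, lock_a)))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(queue_b))  # -> [2, 3]: both pops completed, no deadlock
```

The same discipline -- impose one global acquisition order on all resources -- is how databases avoid deadlocking on row locks, which is one place the five worlds overlap.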

Writing Samples

Stuart Kendrick

My contributions to applying design principles to IT endeavors.

Data Streams

Professionally, I rely on the following:

Career Management

I use Rob England's advice when planning my career.

Insights from Matthew Crawford's Shop Class as Soulcraft into how popular management techniques can drain satisfaction from work and how to reclaim that pleasure.

And from Matthew Stewart's The Management Myth: Why the Experts Keep Getting It Wrong into the profound lack of science in popular 'management science'.

     ...Strategic planning, along with the rest of the discipline of 
strategy, is to modern CEOs what ancient religions were to ancient tribal 
chieftains.  Its rain dances (i.e. budgets) and oracles (i.e. forecasts) 
ultimately explain the divine right of the rulers to rule.  It is actually 
a covert form of political theory.  As the tribal imagery of modern 
strategists makes plain, however, it is a political theory that has advanced 
little from its origins in the prehistoric era.

In managing my career, I am particularly concerned with the following warning:

It is difficult to get a man to understand something, when his salary depends on his not understanding it. --Upton Sinclair


Getting Things Done

There are lots of ways to manage workflow. I developed my own, which eventually crashed and burned under my increasing workload. I am a fan of Next Actions, as described in the eponymous book.


Charles Wheelan, a lecturer at Dartmouth, provides a layperson's introduction to statistics -- basic education for the well-rounded geek: Naked Statistics: Stripping the Dread from the Data

Architectural Thinking

Increasingly, people seem to misinterpret complexity as sophistication, which is baffling - the incomprehensible should cause suspicion rather than admiration. --Niklaus Wirth

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. --Brian Kernighan

Root Cause Analysis

For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled. --Richard Feynman

IT, like all human endeavors, is plagued by 'magical thinking': the tendency of the human brain, immersed as it is in a foaming stew of hormones, to skip rational thought and interpret the world in fantastical ways. Fortunately, reality is the ultimate arbiter of such confusions. Thinking analytically is the foundation of my IT successes (not that I claim to be able to do it all the time!).

  • In this presentation, I make a case for employing reason-based, rather than faith-based, arguments.
  • In this seminar, I present my view of how to apply analytical thinking to solving IT problems.
  • Introductions to critical thinking: Don't Believe Everything You Think by Thomas Kida and How Do We Know It's True? by Hy Ruchlis
  • Carl Sagan reviews the underpinnings of analytical thinking in The Demon-Haunted World.
  • Edward Tufte illustrates clear reasoning about data in his books: Beautiful Evidence, The Visual Display of Quantitative Information, Envisioning Information, and Visual Explanations. My notes from one of his lectures, along with an example of how I employ these skills.

Credible explanations grow from the combined testimony of three more or less independent, mutually reinforcing sources -- explanatory theory, empirical evidence, and rejection of competing alternative explanations. --Edward Tufte


John Day

A long-time player in the 'Net landscape, Day distills his experience into patterns.

  • Patterns in Network Architecture: A Return to Fundamentals
These two types of protocols [data transfer and application] tend to alternate in architectures. The MAC layer does relaying and multiplexing, the data link layer does "end-to-end" error control; the network layer relays, the transport layer does end-to-end error control; mail protocols relay, hmmm, no end-to-end error control, and sometimes mail is lost ...
John Gall

Gall is a retired pediatrician; I'm fond of his insights into complexity and systems. Reading his stuff, I feel recognition -- the pleasure of shared experience -- and chuckle regularly. Systemantics has a long way to go before we can translate its lessons into actionable behavior; nevertheless, I feel more peace at the office, realizing now how hard these problems are.

  • Systemantics: How Systems Work and Especially How They Fail


As I build, I'm always thinking: how am I going to fix this when it breaks? And I modify the design as I go, to make sure that I can get at it, easily, years from now, when it breaks. -- Dave, who renovates houses for a living

These authors describe ways of thinking about IT design which fit my world-view.

Henry Petroski

A mechanical engineer, Petroski suggests building failure into the design process, in order to achieve success.

  • Success Through Failure: The Paradox of Design
When a complex system succeeds, that success masks its proximity to failure. Imagine that the Titanic had not struck the iceberg on her maiden voyage. The example of that "unsinkable" ship would have emboldened success-based shipbuilders to model more and more and larger and larger ocean liners after her. Eventually, albeit by chance, the Titanic or one of those derivative vessels would likely have encountered an iceberg -- with obvious consequences. Thus, the failure of the Titanic contributed much more to the design of safe ocean liners than would have her success. That is the paradox of engineering and design.
... the only way to test definitively a large civil engineering structure is to build it in anticipation of how nature will challenge it and then let nature take its course. This fact of large-scale engineering demands careful, proactive failure analyses.
Things that succeed teach us little beyond the fact that they have been successful; things that fail provide incontrovertible evidence that the limits of design have been exceeded. Emulating success risks failure; studying failure increases our chances of success. The simple principle that is seldom explicitly stated is that the most successful designs are based on the best and most complete assumptions about failure.
Over the years, shuttle managers had treated each additional debris strike not as evidence of failure that required immediate correction, but as proof that the shuttle could safely survive impacts that violated its design specifications. --Diane Vaughan
Fail early and often.

Donald Norman

Engineer, designer, professor, consultant -- Norman integrates a range of sensibilities into his insights, from empathizing with the user (sociable design) to developing simple conceptual models that encapsulate complex capabilities.

  • Living with Complexity

Theo Schlossnagle

A partner at OmniTI, Schlossnagle designs and installs large-scale IT environments supporting e-commerce.

  • Scalable Internet Architectures
The architecture must allow operations and development teams to watch things break, spiral out of control, and otherwise croak. Watching these things happen leads to understanding the cause and in turn leads to solutions.

Atul Gawande

Ostensibly, Gawande writes about his field, surgery; in fact, his thoughts apply to just about any field in which one wants predictable results.

  • The Checklist Manifesto
  • Better

Miscellaneous Links

Incident Analysis

Learn from our mistakes.

Sidney Dekker

A professor at Lund University, Dekker analyzes high-profile accidents.

  • The Field Guide to Understanding Human Error

Of course human error is the proximate cause of most service disruptions.

  • People are the only ones who can hold together the patchwork of technologies introduced into their worlds; the only ones who can make it all work in actual practice;
  • It is never surprising to find human errors at the heart of system failure because people are at the heart of making these systems work in the first place. --Sidney Dekker

The interesting questions come next: what structural conditions encouraged humans to make those errors?

Duncan Watts

A researcher at Yahoo with a background in engineering, mathematics, and sociology, Watts highlights the driving effects of chance in the normal-accident model for understanding both small and large stumbles, and provides support for the measure-and-react approach to strategic planning.

IT Infrastructure Library

I have a few days of basic ITIL training under my belt; our department has just begun to introduce ITIL thinking into how we deliver service. I'm struggling to find a methodology for service management/delivery which makes sense to me -- the closest I've found thus far:

  • The Visible Ops Handbook: Implementing ITIL in 4 Practical and Auditable Steps by Kevin Behr, Gene Kim, and George Spafford

I buy the idea that effective change management delivers the agility and stability we need these days; and I'm convinced that repeatable builds contribute substantially to my sanity as a tech (not to mention reproducibility and MTTR). I'm less confident that this stuff works quite as smoothly as its proponents describe.
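The "repeatable builds" point can be sketched as an idempotent configuration step. This is an illustrative Python fragment, not anything from Visible Ops, and the path and setting names are hypothetical: the step checks current state before acting, so running it twice converges on the same result instead of piling up side effects.

```python
from pathlib import Path

# Hypothetical config target; real builds would parameterize this.
CONF_FILE = Path("/tmp/demo-app/app.conf")
DESIRED = "listen_port=8080\n"

def converge():
    """Idempotent step: act only when actual state differs from desired.

    Returns "changed" the first time, "unchanged" on every re-run --
    the property that makes a build repeatable (and MTTR predictable).
    """
    CONF_FILE.parent.mkdir(parents=True, exist_ok=True)
    if CONF_FILE.exists() and CONF_FILE.read_text() == DESIRED:
        return "unchanged"
    CONF_FILE.write_text(DESIRED)
    return "changed"

first, second = converge(), converge()
print(first, second)  # -> changed unchanged
```

Tools like Puppet, Chef, and Ansible are built around exactly this check-then-act pattern applied to every managed resource.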


I aspire to these approaches to designing and writing code.

Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. --Brian Kernighan

Operating Systems

The basics, as described by three leading players in the field of operating system design.

Federally Funded Research

I've spent most of my career working in the federally funded research space. The relevant accounting rules substantially influence organizational culture at institutions receiving such grants; over the years, I've developed a layperson's guide to how indirect vs. direct funding works.

Last modified: 2017-05-26