The simplest thing

Minimum viable whatever, worse is better, PC-losering, Postel chaos, Burkean engineering


I frequently find it difficult to discern what the simplest thing is. This is a hard problem, e.g. when designing software, experiments, or research questions. It is a notable weakness of mine, and why I am comfortable asserting I would never have invented Deep Learning, which is all about applying an asinine solution to a problem in a stupid way that turns out to be just good enough to get billions of dollars of funding to do it better. That is the effective kind of simplicity.

How about I collect some notes about the fraught question of deciding, as early as possible, what the minimum viable product is? Which cruft is structural, and which is yak shaving?

Greg Kogan frames it as a question of surviving outages:

Take the ship’s steering system, for instance. The rudder is pushed left or right by metal rods. Those rods are moved by hydraulic pressure. That pressure is controlled by a hydraulic pump. That pump is controlled by an electronic signal from the wheelhouse. That signal is controlled by the autopilot. It doesn’t require a rocket scientist or a naval architect to find the cause of and solution to any problem:

  • If the autopilot fails, steer the ship manually from the wheelhouse.
  • If the electronic signals fail, go to the rudder control room to control the pump by hand, while talking with the bridge through a simple sound-powered phone.
  • If the hydraulics fail, use the mechanically linked emergency steering wheel.
  • If the mechanical linkage fails, hook a chain to both sides of the rudder and pull in the direction you want!

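In software terms, Kogan’s ship is a chain of progressively cruder fallbacks, each layer simpler and easier to reason about than the one above it. Here is a minimal sketch of that shape in C; the steering backends are hypothetical stubs standing in for real subsystems, not anything from Kogan’s post:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Hypothetical steering backends, from most automated to most manual.
   Each returns true if it managed to set the heading, false if that
   layer has itself failed. The bodies here are placeholder stubs. */
static bool steer_autopilot(double heading)       { (void)heading; return false; }
static bool steer_wheelhouse(double heading)      { (void)heading; return false; }
static bool steer_pump_by_hand(double heading)    { (void)heading; return false; }
static bool steer_emergency_wheel(double heading) { (void)heading; return false; }
static bool steer_chain_on_rudder(double heading) { (void)heading; return true;  }

typedef bool (*steer_fn)(double);

/* The fallback chain: try the fanciest layer first, then walk down
   to the crudest layer that still works. */
static const steer_fn fallbacks[] = {
    steer_autopilot,
    steer_wheelhouse,
    steer_pump_by_hand,
    steer_emergency_wheel,
    steer_chain_on_rudder,
};

bool steer(double heading) {
    for (size_t i = 0; i < sizeof fallbacks / sizeof fallbacks[0]; i++) {
        if (fallbacks[i](heading)) {
            printf("steered via layer %zu\n", i);
            return true;
        }
    }
    return false;  /* every layer failed; abandon ship */
}

int main(void) {
    return steer(42.0) ? 0 : 1;
}
```

The point is less the code than the shape: each layer is dumber than the last, so diagnosing a failure never requires understanding the whole stack at once.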
This is related to Worse is Better. Richard Gabriel, in the original essay, expounds:

I and just about every designer of Common Lisp and CLOS has had extreme exposure to the MIT/Stanford style of design. The essence of this style can be captured by the phrase “the right thing.” […] I will call the use of this philosophy of design the “MIT approach.” Common Lisp (with CLOS) and Scheme represent the MIT approach to design and implementation.

The worse-is-better philosophy is only slightly different: … the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design. […]

Early Unix and C are examples of the use of this school of design, and I will call the use of this design strategy the “New Jersey approach.” I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach.

However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach.

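Gabriel’s concrete illustration is the “PC loser-ing” problem: when a signal interrupts a long-running system call, the right-thing design resumes the call transparently so the user never notices, while the New Jersey design has the kernel bail out with an error and leave the retry to user code. The implementation stays simple; the interface leaks. The result is the familiar retry idiom that Unix programs still carry around, sketched here:

```c
#include <errno.h>
#include <unistd.h>

/* The user-space half of the worse-is-better bargain: read() may fail
   with EINTR when a signal arrives mid-call, so the caller retries.
   The kernel stays simple; a little complexity leaks into every caller. */
ssize_t read_retrying(int fd, void *buf, size_t count) {
    ssize_t n;
    do {
        n = read(fd, buf, count);
    } while (n == -1 && errno == EINTR);
    return n;
}
```

A few lines of boilerplate in every program turned out to be a cheaper price than a correct-but-complex kernel, which is roughly Gabriel’s point about survival characteristics.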
More generally, this connects to a question I keep facing: which way of doing things is, overall, simplest? What is hubristic Not-Invented-Here-type high modernism, and what is the clarity of starting over? This perhaps connects to metis and friends via Chesterton’s fence, Postel’s Law, YAGNI, Gall’s Law…

Steven Wittens:

The reason software isn’t better is because it takes a lifetime to understand how much of a mess we’ve made of things, and by the time you get there, you will have contributed significantly to the problem.

A test case here is rewriting software and the attendant complexities, some of which go by the name of the second-system effect. When is it simpler to start over with a shinier thing? Adam Turoff, Rewriting software:

If you’re dealing with a small here and a short now, then there is no time to rewrite software. There are revenue goals to meet, and time spent redoing work is retrograde, and in nearly every case poses a risk to the bottom line because it doesn’t deliver end user value in a timely fashion. […]

If you’re dealing with a big here and a long now, whatever work you do right now is completely inconsequential compared to where the project will be five years from today or five million users from now. Requirements change, platforms go away, and yesterday’s baggage has negative value — it leads to hard-to-diagnose bugs in obscure edge cases everyone has forgotten about. The best way to deal with this code is to rewrite, refactor or remove it. […]

The key to estimating whether a rewrite project is likely to succeed is to first understand when it needs to succeed.