About

In late 2023, I did a series of podcast episodes on ecological and embodied cognition. (Start here, even though that episode is marked as optional. Note that all episodes have transcripts.) In the last episode, I identified “Illinois-style design”, which is biased toward creating data structures or classes that are faithful representations of the program’s domain. In iterative/incremental design, Illinois style refactors toward… 

a design that feels right, where “feels right” is some (I’d argue) semi-intuitive, semi-learned combination of “elegance”; a resemblance to the abstractions used in fields we envy, particularly math and physics; a “shape” that seems likely to support making more changes like the ones the app has already undergone; and – frequently – some kind of faithful mapping to part of the app’s environment.

I contrasted that to a tentative “New Hampshire style” that’s faithful to biology. This blog will document my exploration of that style. Note that my goal is intellectual and programming fun, not the development of a style better than Illinois style. (But wouldn’t it be cool if there were useful bits that could be incorporated into Illinois-style practice?)

I will explore by creating a particular app – what I’m calling the “app-animal” – to help me write podcast scripts. The app-animal and I will share an environment, a complex document containing structured text, integrated notes, and so on. The app-animal will watch the changes I make to our shared environment. It may react to some of them by making its own changes (independently and asynchronously). 


I am particularly interested in these questions:

  1. There are patterns – common solutions to typical problems – used in the brain. Which software design patterns or mechanisms “fit” those brain patterns? For example, the brain is fond of self-reinforcing loops of neuronal firings. The way those loops behave matches up with the actor model of concurrency, especially as implemented in the Erlang virtual machine (where actors are cheap). So it makes sense to say app-animals should be built from actors. (See the first sketch after this list.)

    Note that it’s not super-clear what I mean by “a software mechanism fitting a brain pattern”. Figuring that out is part of the project. Right now, it means something like breaking both the mechanism and pattern into parts and saying “see how this part here is like that part over there?” often enough that the whole analogy is convincing.

  2. Evolution proceeds by small steps. That’s OK, because I’m used to programming in the same way, being basically a follower of Extreme Programming. But the small steps I’m used to are driven by Illinois-style concerns. What are the small steps a New Hampshire style programmer would use? How do they differ from Illinois steps? I’m especially interested in refactorings because Illinois-style refactorings are, at root, about the programmer more than the program. The usual refactorings are justified by the belief that applying them will make a program more understandable, both today and, especially, in the future.

    [Image: Metabolic Metro Map] This is what evolution considers great design. (attribution)

    But any refactoring that evolution does is not about any “programmer.” Certainly, the body has not been refactored to make physicians’ lives easier. (Ask my wife how I know.)

    But human programmers aren’t evolution: they can’t rely on millions of deaths over many, many generations to work out the kinks of a design. So refactorings have to be biologically plausible while still pushing in the direction of human maintainability – how does that work?

  3. Think of evolution as preferentially solving problems with the simplest thing that could possibly work. Its stock solution is to create a new direct action link that sits between an affordance in the environment and a command to motor neurons. Such a link contains the smallest amount of data and processing needed to accomplish its purpose. And, in particular, it’s hard to think of the data as representing anything about the environment. It is not a map or mirror of nature. (See the second sketch after this list.)

    Some ecological cognition people think direct action links and similar non-models of the environment might be sufficient to explain all behavior, even language, but I venture to guess that most do not. Andy Clark supposes that evolutionary forces can indeed push toward complicated, true representations:

    “If a creature needs to use the same body of information to drive multiple or open-ended types of activity, it will often be economical to deploy a more action-neutral encoding which can then act as input to a whole variety of more specific computational routines. For example, if knowledge about an object’s location is to be used for a multitude of different purposes, it may be most efficient to generate a single, action-independent inner map that can be accessed by multiple, more special-purpose routines.”

    How can evolution toward representations be a model for incremental program design? Operationally, how do you start with a soup of small, ephemeral actor objects that contain teensy chunks of non-representational data and move – by small steps – toward a large, persistent actor object that looks like it contains a faithful representation of part of the domain? (The third sketch after this list shows the two endpoints.)

    I’m actually more interested in whether it’s possible to move toward large, persistent actors that are broadly useful but still aren’t representational. I know how to refactor to domain models. It’ll be interesting to see how to create (and maintain, and extend) complex data structures that don’t pretend to map onto reality. I want to ask: are specifically representational models just a mental crutch for programmers? And: how far can we hobble forward without the crutch?
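
To make question 1 concrete, here’s a minimal sketch of a self-reinforcing loop built from cheap actors, written in Erlang since that’s the virtual machine mentioned above. All the names (the module `firing_loop`, messages like `{pulse, N}`) are inventions for illustration, not part of any settled design. A single pulse injected into a ring of three “neuron” processes keeps the whole ring firing; the countdown is there only so the demo halts.

```erlang
-module(firing_loop).
-export([start/0]).

%% A neuron first learns its downstream neighbor, then relays pulses.
neuron() ->
    receive
        {next, Next} -> neuron(Next)
    end.

%% Every pulse received is re-sent downstream, so a single injected
%% pulse keeps the ring firing. The counter exists only so the demo
%% eventually stops.
neuron(Next) ->
    receive
        {pulse, 0} ->
            Next ! {pulse, 0},          % pass the stop signal around once
            done;
        {pulse, N} ->
            Next ! {pulse, N - 1},      % re-fire: the loop sustains itself
            neuron(Next)
    end.

%% Wire three neurons into a ring and inject one pulse.
start() ->
    A = spawn(fun neuron/0),
    B = spawn(fun neuron/0),
    C = spawn(fun neuron/0),
    A ! {next, B},
    B ! {next, C},
    C ! {next, A},
    A ! {pulse, 9}.
```

The shape is the point: each process is trivially cheap, and the ongoing behavior lives in the circulating message rather than in any one actor’s state.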
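
The direct action link from question 3 makes for an even smaller sketch. Again, the specifics (`graspable`, `close_hand`) are made up; what matters is that the process wired between affordance and motor command holds no state and represents nothing about the world.

```erlang
-module(direct_link).
-export([start/1]).

%% One long-lived, stateless process between a single affordance
%% and a single motor command. See a grasp opportunity, command a
%% grasp; that's the whole job.
link_loop(MotorNeuron) ->
    receive
        {affordance, graspable} ->
            MotorNeuron ! {command, close_hand},
            link_loop(MotorNeuron);
        _Ignored ->
            link_loop(MotorNeuron)      % anything else is ignored, not modeled
    end.

start(MotorNeuron) ->
    spawn(fun() -> link_loop(MotorNeuron) end).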
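
Finally, hedged the same way, here are the two endpoints of question 3’s refactoring puzzle: a soup-style ephemeral actor born with one teensy chunk of data, and a persistent actor whose accumulated state starts to look like a map of part of the domain.

```erlang
-module(soup).
-export([ephemeral/2, start_persistent/0]).

%% Starting point: an actor born with one teensy, non-representational
%% chunk of data. It does its one job and evaporates.
ephemeral(Chunk, Downstream) ->
    spawn(fun() -> Downstream ! {act_on, Chunk} end).

%% Possible end point: a long-lived actor whose accumulated state
%% starts to look suspiciously like a representation of the domain.
persistent(State) ->
    receive
        {note, Key, Value} ->
            persistent(maps:put(Key, Value, State));
        {query, Key, From} ->
            From ! {answer, maps:get(Key, State, unknown)},
            persistent(State)
    end.

start_persistent() ->
    spawn(fun() -> persistent(#{}) end).
```

The sequence of small steps that might lead from the first to the second – if there is one – is what this blog is for.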