Control and implementation in prototype 2

Here’s the more technical explanation of prototype 2. I describe the modules or, metaphorically, “neural clusters” that information flows through. For the Elixir-curious, I describe some of the Elixir features I rely on. The really nitty-gritty details are in italic text, and I’ve tried to make them skippable.

Continue reading →


Moving a fragment in prototype 2

This prototype implements a different strategy from the one described in the podcast episode. The strategy was partly inspired by how hard prototype 1 turned out to be. In contrast to that one, this seems like something worth building on, though issues remain. I’ll show and explain the code in a separate post.

Keywords: scanning, fail fast, surprise, brain as prediction engine

Continue reading →


Predictable asynchrony problems

The first prototype is not awfully informative, except to reinforce what everyone knows: having a lot of asynchrony in a program is a great way to produce race conditions.
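To make the point concrete, here is a minimal illustration (my own, not code from the prototype) of the kind of race condition asynchrony invites. Even though Elixir data is immutable, a non-atomic read-then-write against shared state held in an `Agent` can lose updates when many concurrent tasks interleave:

```elixir
# Shared state: a counter held by an Agent process.
{:ok, counter} = Agent.start_link(fn -> 0 end)

# Non-atomic increment: read the value, then write value + 1 in a
# separate call. Two tasks can read the same value and both write
# back value + 1, losing one of the increments.
bump = fn ->
  v = Agent.get(counter, & &1)
  Agent.update(counter, fn _ -> v + 1 end)
end

tasks = for _ <- 1..100, do: Task.async(bump)
Enum.each(tasks, &Task.await/1)

# Often less than 100, because of lost updates.
final = Agent.get(counter, & &1)
IO.inspect(final)
```

The fix, for what it's worth, is to make the read-modify-write atomic: `Agent.update(counter, &(&1 + 1))` performs the whole step inside the Agent process, so no interleaving is possible.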

Continue reading →


Styles of perception: predator and prey

In the first prototype (perhaps to be described later), I adopted what I think of as a predator approach to perception. In the next prototype, I’m going to switch to a prey approach. Necessary background: in the prototype, the app-animal is focused on a single paragraph within a script. It’s focused there because that’s the paragraph that contains the cursor (insertion point). But what does “focused” mean? For the predator approach, think of a cat carefully watching a mouse hole.

Continue reading →


Just enough Elixir for the prototypes

I’ll be showing Elixir code in future blog posts. Elixir looks a fair amount like Ruby, and a lot of it is pretty straightforward to someone who knows a few languages. However, the way it handles processes (its version of actors) is fairly special and has its own notation. Since I’ll be using processes as a key building block, I’ll explain them here. First, some essential background… Immutability: Elixir data is strictly immutable; you don’t get to modify any values.
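A quick sketch of those two ideas (my own example, not from the post): values are never modified in place, and processes communicate only by sending and receiving messages.

```elixir
# Immutability: building a new list doesn't change the old one.
list = [1, 2, 3]
longer = [0 | list]   # prepends, producing a new list
# `list` is still [1, 2, 3]

# Processes: spawn a lightweight process and exchange messages.
caller = self()

pid = spawn(fn ->
  receive do
    {:ping, from} -> send(from, :pong)
  end
end)

send(pid, {:ping, caller})

reply =
  receive do
    :pong -> :pong
  end

IO.inspect({longer, reply})
```

Note that `spawn`, `send`, and `receive` are all there is to the core notation: a process is just a function running concurrently, with a mailbox.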

Continue reading →


A design constraint: bodies are kludges

In a conversation on Mastodon, Jeff Grigg wrote: the better I understand real biology, the better I understand how absolutely horrifyingly bad its “designs” often are! This is entirely true. I emphasized that in an introduction I cut from the final episode in the series. Here it is, to show I appreciate the problem. It’s pretty much true that this project is going to be a dance where I try to be biologically faithful while also coping with the fact that evolution is capable of dealing with far more complexity than I am.

Continue reading →


New Hampshire style (direct control links)

Here are the guidelines or principles or heuristics I’ll be using for early prototypes.

⊕ The app and the user (hereafter: “Brian”) are considered two independent (asynchronous) animals interacting via an environment. For the sample app, the environment is a document (in the broad sense). The “app-animal” is divided into three systems. The perceptual system observes the environment, looking for new affordances. When one is seen, control passes to the “control” system, which – typically – instructs the motor system, which changes the environment.
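The three-system split above can be sketched as three Elixir processes passing messages along the perceive → control → act chain. This is a hypothetical illustration of the structure, not the prototype’s actual code; the message shapes and names are mine.

```elixir
caller = self()

# Motor system: carries out an action and (here) reports the
# resulting environment change back to the test harness.
motor = spawn(fn ->
  receive do
    {:act, action} -> send(caller, {:environment_changed, action})
  end
end)

# Control system: decides what to do about an affordance and
# instructs the motor system.
control = spawn(fn ->
  receive do
    {:affordance, what} -> send(motor, {:act, {:respond_to, what}})
  end
end)

# Perceptual system: notices something new in the environment and
# hands control on. Here we simulate it with a single message.
send(control, {:affordance, :cursor_moved})

result =
  receive do
    {:environment_changed, action} -> action
  end

IO.inspect(result)
```

The point of the sketch is the one-way flow: perception never acts on the environment directly; it only reports affordances, and only the motor system changes anything.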

Continue reading →