Controllerism

Making things happen by waving your arms about on stage

November 18, 2014 — December 10, 2019

machine learning
making things
music
real time
UI

“ […] Now. Where is it?”

“Where is what?”

“The time machine.”

“You’re standing in it,” said [X].

“How… does it work?” [Y] said, trying to make it sound like a casual enquiry.

“Well, it’s really terribly simple,” said [X], “it works any way you want it to. You see, the computer that runs it is a rather advanced one. In fact it is more powerful than the sum total of all the computers on this planet including — and this is the tricky part — including itself. Never really understood that bit myself, to be honest with you. But over ninety-five per cent of that power is used in simply understanding what it is you want it to do. I simply plonk my abacus down there and it understands the way I use it. I think I must have been brought up to use an abacus when I was a… well, a child, I suppose.

“[R], for instance, would probably want to use his own personal computer. If you put it down there, where the abacus is, the machine’s computer would simply take charge of it and offer you lots of nice user-friendly time-travel applications complete with pull-down menus and desk accessories if you like. Except that you point to 1066 on the screen and you’ve got the Battle of Hastings going on outside your door, er, if that’s the sort of thing you’re interested in.”

[X]’s tone of voice suggested that his own interests lay in other areas.

“It’s, er, really quite fun in its way,” he concluded. “Certainly better than television and a great deal easier to use than a video recorder. If I miss a programme I just pop back in time and watch it. I’m hopeless fiddling with all those buttons.” […]

“You have a time machine and you use it for… watching television?”

“Well, I wouldn’t use it at all if I could get the hang of the video recorder.”

On the dark art of persuading the computer to respond intuitively to your intentions, with particular regard to music.


This is a more general problem than gesture recognition, which hunts for particular sequences of movements in an input data stream; here we are concerned with more general options for turning movement into control. Either way, we are probably thinking of using controller interfaces to extract that information.

In non-musical circles they call this “physical computing”, or “natural user interfaces”, or “tangible computing”, depending upon whom they are pitching to for funding this month.

How do I plug these into each other in an intelligible, expressive way so as to perform using them?

This question is broad, vague and comes up all the time.

Ideas I would like to explore:

1 Random mappings

  • sparse correlation (a code sketch follows this list)
  • physical models as input
  • random sparse physical models as input
  • annealing/Gibbs distribution style process
  • Der/Zahedi/Bertschinger/Ay-style information sensorimotor loop
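As a rough sketch of what a random, sparse mapping from controller channels to synthesis parameters might look like (a minimal example; the channel counts, sparsity level and squashing function are placeholder choices for illustration, not recommendations):

```python
import numpy as np

rng = np.random.default_rng(42)

N_INPUTS = 8     # hypothetical number of controller channels (faders, knobs, sensors)
N_PARAMS = 16    # hypothetical number of synthesis parameters to drive
SPARSITY = 0.2   # fraction of input->parameter connections that are nonzero

# Random sparse mapping matrix: most entries are zero, a few get random weights.
mask = rng.random((N_PARAMS, N_INPUTS)) < SPARSITY
weights = rng.normal(size=(N_PARAMS, N_INPUTS)) * mask

def map_controls(inputs: np.ndarray) -> np.ndarray:
    """Map raw controller values in [0, 1] to parameter values in [0, 1]."""
    centred = inputs - 0.5              # centre so the "rest" position maps to neutral
    raw = weights @ centred             # sparse random linear combination
    return 1.0 / (1.0 + np.exp(-raw))   # squash back into [0, 1]

# One frame of (fake) controller data in, one frame of parameter values out.
params = map_controls(rng.random(N_INPUTS))
```

The annealing/Gibbs and sensorimotor-loop items above would presumably be ways of adapting such a mapping over time, rather than drawing it at random once and leaving it fixed.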

2 “Copula” Models

And related stuff.

Copulas are an intuitive way to relate two or more (monotonically varying?) values by their quantiles.

The most basic one is Gaussian, where the parameter of the copula is essentially the correlation. For various reasons, I’m not keen on this in practice; I do not have time to go into my intuitions as to why it is so, but Gaussian tails “feel” wrong for control. Student-t, perhaps?
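As a minimal sketch of how a Gaussian copula could couple two control values by their quantiles (assuming scipy is available; for an actual controller you might prefer the second value to be a deterministic function of the first rather than a random draw, but this shows the machinery):

```python
import numpy as np
from scipy import stats

def gaussian_copula_couple(u: float, rho: float, rng=None) -> float:
    """Given one control value u in (0, 1), produce a second value v in (0, 1)
    whose dependence on u follows a Gaussian copula with correlation rho."""
    rng = rng or np.random.default_rng()
    z1 = stats.norm.ppf(u)                        # quantile -> Gaussian score
    # Conditional bivariate normal: z2 | z1 ~ N(rho * z1, 1 - rho**2)
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    return float(stats.norm.cdf(z2))              # back to the quantile scale

# Example: a fader at 0.9 drives a strongly correlated partner value.
v = gaussian_copula_couple(0.9, rho=0.8)
```

A Student-t version would swap the normal quantile/CDF and the Gaussian conditional for their t-distribution counterparts, giving heavier tails than the Gaussian behaviour complained about above.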

See copulas.

3 UI design ideas

  • circular sequencer
  • gesture classifiers
  • accelerometer harvest for smartphone