Nature-inspired algorithms for computers, for problems without obvious “normal” solutions. (If you want computer-inspired algorithms for nature, that is the dual of this: bio-computing.)
The problem to be solved is usually a search/optimisation one. Normally evolutionary algorithms are in here too, as are ant colonies, particle swarms, and that one based on choirs… harmony search? Typically these are attractive because they are simple to explain, although often less simple to analyse.
Is this a real field, separate from all the things that look similar to it? Often these methods are asymptotically the same as some conventional stochastic method, e.g. particle swarms and particle systems, or evolution and stochastic gradient descent. Somewhere in this mix is artificial chemistry, where you use a simplified model of a natural process as a simplified model for computing about other natural processes, or for showing that natural processes might be computing, or something like that.
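To make that “asymptotically the same as a conventional stochastic method” point concrete, here is a minimal sketch of a vanilla global-best particle swarm in Python. The inertia and acceleration coefficients w, c1, c2, the bounds, and the test objective are illustrative defaults, not from any particular reference.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimise f: R^dim -> R with a vanilla global-best particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    p_best = x.copy()                             # each particle's best position so far
    p_val = np.apply_along_axis(f, 1, x)          # objective at those positions
    g_best = p_best[p_val.argmin()].copy()        # swarm-wide best position

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + pull towards personal best + pull towards global best
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[p_val.argmin()].copy()
    return g_best, p_val.min()

# e.g. a shifted sphere; the swarm should find the minimum at (1, ..., 1)
best_x, best_val = pso(lambda z: np.sum((z - 1.0) ** 2), dim=5)
print(best_x, best_val)
```

Squint and the update is a noisy momentum method: the inertia term plays the role of momentum, and the random pulls towards personal and global bests stand in for stochastic gradient information.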
…and quorum sensing? How about that? Multi-agent systems?
Points of contact between biological neural nets and artificial neural nets are always entertaining. Beniaguev, Segev, and London (2021) is one. See Fruit Fly Brain Hacked For Language Processing for an artificial-neural-networks-meeting-their-ancestors moment.
Genetic programming
See genetic programming.
Forward-forward networks
Neural networks without backprop are “more” biologically plausible. Here is a class of such networks (Hinton, n.d.; Ren et al. 2022). From Hinton’s abstract:
The aim of this paper is to introduce a new learning procedure for neural networks and to demonstrate that it works well enough on a few small problems to be worth serious investigation. The Forward-Forward algorithm replaces the forward and backward passes of backpropagation by two forward passes, one with positive (i.e. real) data and the other with negative data which could be generated by the network itself. Each layer has its own objective function which is simply to have high goodness for positive data and low goodness for negative data. The sum of the squared activities in a layer can be used as the goodness but there are many other possibilities, including minus the sum of the squared activities. If the positive and negative passes can be separated in time, the negative passes can be done offline, which makes the learning much simpler in the positive pass and allows video to be pipelined through the network without ever storing activities or stopping to propagate derivatives.
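A minimal sketch of the idea in PyTorch, assuming nothing beyond the description above: each layer greedily optimises a local logistic loss on its goodness (sum of squared activities) against a threshold, with length-normalised, detached inputs so that neither goodness nor gradients leak between layers. This is not Hinton’s reference code; the layer sizes, threshold, learning rate, and random stand-in positive/negative batches are all illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FFLayer(nn.Module):
    """One layer trained with a local Forward-Forward objective:
    goodness = sum of squared activities; push it above a threshold
    for positive data and below it for negative data."""
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # length-normalise the input so the previous layer's goodness
        # cannot leak through to this layer's objective
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        # logistic loss: high goodness for positive, low for negative
        loss = torch.cat([
            F.softplus(self.threshold - g_pos),
            F.softplus(g_neg - self.threshold),
        ]).mean()
        self.opt.zero_grad(); loss.backward(); self.opt.step()
        # pass detached activities onward: no gradient crosses layers
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

# stack layers; each trains greedily on the previous layer's output
torch.manual_seed(0)
layers = [FFLayer(784, 256), FFLayer(256, 256)]
x_pos = torch.randn(32, 784)   # stand-in for real ("positive") data
x_neg = torch.randn(32, 784)   # stand-in for generated negative data
for _ in range(10):
    h_pos, h_neg = x_pos, x_neg
    for layer in layers:
        h_pos, h_neg = layer.train_step(h_pos, h_neg)
```

Note there is still gradient computation inside each layer, but it never propagates between layers, which is what licenses the “no backward pass through the network” claim.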