Hopfield82a

Neural networks and physical systems with emergent collective computational abilities. J. J. Hopfield in Proc. Natl. Acad. Sci. 79:2554 (1982). What does the paper say?

This is a seminal paper in which Hopfield shows that an Ising-type system can be "trained" to behave as an associative memory (now known as a "Hopfield network"). It answers in the affirmative the question of whether collective phenomena in systems of simple interacting neurons can have useful "computational" functions, thereby igniting the neural-network era of Artificial Intelligence:

This paper examines a new modeling of this old and fundamental question (4-8) and shows that important computational properties spontaneously arise.

References 4-8 being:

Interestingly, William Little is among the pioneers of this field here too!

There he makes the key observations:

Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons).

(First line of the abstract), as well as:

  • this is inspired by neurobiology.
  • this can be implemented by electronic circuits.
  • this is independent of the details of the model.
  • the network can be trained (any desired solution can be made a stable point).

He also puts emphasis on elements whose actual relevance is less clear, such as synchronicity, the symmetric pairing of the neurons, etc.

The Hopfield network as described by Hopfield:

It differs from the perceptron (earlier attempts) by its all-to-all couplings. This accounts for the new results:

All our interesting results arise as consequences of the strong back-coupling.

It is also asynchronous, thus less like a clocked computer, and it deals with abstract encoding as opposed to signal processing.
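As a reminder of the dynamics at play (a sketch in the paper's own notation, with V_i in {0, 1} the state of neuron i, T_ij the couplings and the thresholds set to zero as in most of the paper), each neuron re-evaluates its state asynchronously, at random times:

\[
V_i \rightarrow
\begin{cases}
1 & \text{if } \sum_{j\neq i} T_{ij} V_j > 0\,,\\
0 & \text{otherwise}\,,
\end{cases}
\]

and, for symmetric couplings \(T_{ij}=T_{ji}\), each such update can only lower the energy

\[
E = -\tfrac{1}{2}\sum_{i\neq j} T_{ij} V_i V_j\,,
\]

so the flow ends in local minima, which play the role of the stored memories.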

The training is Hebbian-inspired, with weights of the connections defined by the information to be encoded:
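A minimal sketch of this prescription (in Python, with hypothetical names; the memories are 0/1 vectors, the weights follow the paper's rule T_ij = Σ_s (2V_i^s − 1)(2V_j^s − 1) with T_ii = 0, and recall uses the zero-threshold asynchronous update above):

```python
import numpy as np

def store(patterns):
    """Hebbian-style storage: T_ij = sum_s (2 V_i^s - 1)(2 V_j^s - 1), with T_ii = 0."""
    S = 2 * np.asarray(patterns) - 1      # map {0, 1} states to {-1, +1}
    T = S.T @ S                           # outer-product sum over the stored memories
    np.fill_diagonal(T, 0)                # no self-coupling
    return T

def recall(T, cue, n_sweeps=10, rng=None):
    """Asynchronous recall: neurons update one at a time, in random order (threshold 0)."""
    rng = np.random.default_rng() if rng is None else rng
    V = np.asarray(cue).copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(V.size):
            V[i] = 1 if T[i] @ V > 0 else 0
    return V

# Toy usage: 5 random memories in N = 100 neurons, recalled from a corrupted cue.
rng = np.random.default_rng(0)
memories = rng.integers(0, 2, size=(5, 100))
T = store(memories)
cue = memories[0].copy()
cue[:10] ^= 1                             # flip the first 10 bits
print((recall(T, cue, rng=rng) == memories[0]).mean())
```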

The size of the problems tackled:

Results of the simulation (fairly fast).

He also observed chaotic wandering in a small region of state space. He notes that "Simulations with N=100 were much slower and not quantitatively pursued."

The maximal information stored for N=100 neurons occurred at about n=13 possible memories.
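For scale, the capacity quoted in the paper's abstract is about 0.15 N states remembered before recall errors become severe; taken at face value,

\[
n_{\max} \approx 0.15\,N \quad\Rightarrow\quad n_{\max}(N=100) \approx 15\,,
\]

of the same order as the n ≈ 13 at which the stored information peaks.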

On limitations and new memories:

On the performance of the brain to retain memories:

Quantitative results:

As a formal example of something to be stored in memory, Hopfield seems to have nothing more vivid than Statistics of the Two-Dimensional Ferromagnet. Part I. H. A. Kramers and G. H. Wannier in Phys. Rev. 60:252 (1941):

The paper has a funny footnote: