The Biological Neuron and its Computational Counterpart

How did our early understanding of neurons inspire today’s neural networks?

Eduardo Alvarez
5 min read · Sep 13, 2022

Today, we take a high-level look at the biological neuron and outline the key components that inspired its computational model in 1943. Though the artificial neuron has evolved significantly, studying the McCulloch-Pitts model builds a better understanding of modern neural networks.

[Image courtesy of IBM]

Biological Neurons

Biological neurons are complex and unusual-looking cells found in animal brains.

For our purposes, the parts of the structure that we will focus on are the Axon, Telodendria, and Synaptic terminals:

  • An Axon is a long extension that branches away from the main cell body. The Axon may be only slightly longer than the cell body, or thousands of times longer.
  • Telodendria are extensions that branch out from the tip of the Axon.
  • Synaptic terminals, or Synapses, sit at the tips of the Telodendria and connect to the dendrites or cell bodies of other neurons.

Biological neurons use these structures to transmit signals (action potentials) along their axons, causing the synapses to release chemicals called neurotransmitters. Upon receiving a sufficient quantity of a specific neurotransmitter, the receiving neuron fires or inhibits its own electrical impulses. Vast networks of neurons, firing or inhibiting electrical impulses, are responsible for the complex signals that enable movement, thought, and more. Mappings of the brain indicate that neurons are often organized in consecutive layers, particularly in the cerebral cortex. The architecture of biological neural networks (BNNs) is still an active research area.

A diagram of the neocortex shows its standard six-layered (I-VI) arrangement. These arrangements of layered neurons handle the input and output signals that control motor functions and visual processing.

The motor cortex has an expanded output layer (layer V) because it must send complex signals through axons down the spinal cord. The primary visual cortex, on the other hand, must handle dense, complex incoming signals and therefore has an expanded input layer (layer IV) composed of three sublayers.

Hopefully, you can start to piece together some similarities between the functional behavior of BNNs and the architectures of modern artificial neural networks (ANNs).

Computations with Artificial Neurons

In 1943, Warren McCulloch (neuroscientist) and Walter Pitts (logician) proposed a computational model of a biological neuron. In their seminal paper “A Logical Calculus of the Ideas Immanent in Nervous Activity,” they aimed to emulate the behavior of biological neurons with the concept of threshold binary activation functions.

Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic.

- McCulloch and Pitts (1943)

This model showed that artificial neurons, each with one or more binary (on/off) inputs and one binary output, can be arranged into complex networks capable of various logical computations. Let’s use the following examples to develop a practical understanding of their function; a short code sketch after the list ties them together.

  • Network #1: C = A is a basic identity function: upon neuron A’s activation, neuron C is also activated. When neuron A is off, neuron C is off as well.
  • Network #2: C = A and B performs a logical AND, where neuron C is only activated if both neurons A and B are active, i.e., a single input signal is not enough to activate neuron C.
  • Network #3: C = A or B performs a logical OR, where neuron C is activated if either neuron A or neuron B is active.
  • Network #4: C = A and not B performs a more complex operation: neuron C is activated only if A is active and B is not. Another way to think about this is that if neuron A were always active, C would be active only when B is off. These kinds of logical operations are the most similar to what occurs in BNNs.
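
To make this concrete, here is a minimal Python sketch of a McCulloch-Pitts unit. The function name mp_neuron and the inhibitory-input convention used to express “not B” are my own illustration, not notation from the 1943 paper. Each unit fires when enough excitatory inputs are active and no inhibitory input is active:

```python
def mp_neuron(inputs, threshold, inhibitory=()):
    """A McCulloch-Pitts unit: fires (returns 1) when the count of active
    excitatory inputs reaches the threshold and no inhibitory input is active."""
    if any(inhibitory):
        return 0
    return int(sum(inputs) >= threshold)

# Network #1: C = A (identity, threshold of 1)
identity = lambda a: mp_neuron([a], threshold=1)

# Network #2: C = A and B (threshold of 2, so one input alone is not enough)
and_gate = lambda a, b: mp_neuron([a, b], threshold=2)

# Network #3: C = A or B (threshold of 1, so either input suffices)
or_gate = lambda a, b: mp_neuron([a, b], threshold=1)

# Network #4: C = A and not B (B wired as an inhibitory input)
and_not_gate = lambda a, b: mp_neuron([a], threshold=1, inhibitory=[b])

for a in (0, 1):
    for b in (0, 1):
        print(f"A={a} B={b} | AND={and_gate(a, b)} "
              f"OR={or_gate(a, b)} A-AND-NOT-B={and_not_gate(a, b)}")
```

Running the loop prints the truth table for each network, confirming that fixed thresholds (plus inhibitory connections) are enough to realize all four logical operations.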

In this computational model of the biological neuron, artificial neurons A and B emulate the behavior of synapses emitting neurotransmitters. Like its biological counterpart, the receiving neuron C decides whether to fire/activate based on a set of predefined rules.

It is essential to understand that this was a very early model of biological neurons. One of the difficulties with the McCulloch-Pitts neuron was its simplicity:

  • It only allowed binary inputs and outputs.
  • It only used the threshold step activation function.
  • It did not incorporate the weighting of its different inputs.

Evolving Beyond McCulloch-Pitts

In 1949, Donald Hebb proposed what has come to be known as Hebb’s rule. He stated:

“When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”
- Donald Hebb

Hebb proposed that when two neurons fire together, the connection between the neurons is strengthened. He also suggested that this activity is one of the fundamental operations necessary for learning and memory.
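
A common textbook formalization of this idea (not Hebb’s own notation) strengthens a connection in proportion to the product of pre- and post-synaptic activity. Here is a minimal sketch, where the learning rate lr is a parameter I am introducing for illustration:

```python
import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    # "Fire together, wire together": the weight from each input grows in
    # proportion to how strongly that input and the output are co-active.
    return w + lr * x * y

w = np.zeros(2)
x = np.array([1.0, 0.0])  # pre-synaptic: input A fires, input B is silent
y = 1.0                   # post-synaptic neuron fires
w = hebbian_update(w, x, y)
print(w)  # [0.1, 0.0] -> only the connection from the co-active input grows
```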

The first perceptron, one of the most straightforward ANN architectures, was invented by Frank Rosenblatt in 1957 by applying the findings of both McCulloch-Pitts and Hebb. The Rosenblatt perceptron was essentially a McCulloch-Pitts neuron that learned through the weighting of its inputs, i.e., Hebbian learning.
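
Here is a minimal sketch of that idea, using the standard perceptron learning rule rather than Rosenblatt’s original hardware formulation; the code and names are my own illustration. It keeps the McCulloch-Pitts threshold activation but learns its weights, so it can acquire the AND gate from Network #2 instead of having the threshold hard-coded:

```python
import numpy as np

def perceptron_step(x, w, b):
    # Threshold (step) activation, just like the McCulloch-Pitts unit
    return int(np.dot(w, x) + b >= 0)

def train_perceptron(data, lr=0.1, epochs=20):
    # Perceptron rule: move weights toward inputs the unit should have
    # fired on, and away from inputs it fired on by mistake.
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - perceptron_step(x, w, b)
            w = w + lr * error * np.array(x)
            b = b + lr * error
    return w, b

# Learn the AND gate from Network #2 instead of hard-coding its threshold
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([perceptron_step(x, w, b) for x, _ in data])  # [0, 0, 0, 1]
```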

Thank you for reading!

References

McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4), 115–133.

Hebb, D. O. (1949). The Organization of Behavior: A Neuropsychological Theory. Wiley.

Originally published at https://www.linkedin.com.

