Introduction to Neural Networks - Key Points
- Neural networks are computing systems modelled on the
biological description of the brain.
- Typically they possess tens to hundreds of nodes with many tens
of thousands of connections between these nodes.
- The nodes are simplified models of real neurons. Typically,
after receiving some external input, each node's state can be
described as either firing or not firing.
- Often we can simulate the operation of such a network
with a program that executes on a traditional computer,
as the sketch below illustrates.
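As a concrete illustration, here is a minimal sketch of one
such node in Python (an assumed choice of language). It treats
the node as a simple threshold unit: the weighted inputs are
summed and the node fires only if the total exceeds its
threshold. The weights and threshold values are invented for
this example.

```python
def node_state(inputs, weights, threshold):
    """Return 1 (firing) or 0 (not firing) for a single node."""
    # Sum the weighted external inputs arriving at the node.
    activation = sum(w * x for w, x in zip(weights, inputs))
    # The node fires only if its activation exceeds the threshold.
    return 1 if activation > threshold else 0

# Illustrative values: this node fires only when both inputs are active.
print(node_state([1, 1], weights=[0.6, 0.6], threshold=1.0))  # 1 (firing)
print(node_state([1, 0], weights=[0.6, 0.6], threshold=1.0))  # 0 (not firing)
```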
- We have considered two types of artificial neural network,
the Hopfield and Perceptron models.
- The Hopfield network is capable of building a so-called
'content-addressable' memory and simulating the recall
process of the brain.
- Every node or neuron is wired to every other.
- In general the firing activity pattern changes with
time until a stable configuration is reached.
- Memories are built in as 'stable firing patterns' of the
network's nodes.
- In this way it is possible to recover perfect
memories or images from only partial information.
- The network requires a smart 'teacher' to set up its
connections to store any given memory (see the sketch below).
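A minimal sketch of such a network appears below, assuming
+/-1 node states, a Hebbian rule playing the role of the
'teacher', and repeated sign updates for recall; the stored
pattern and the amount of corruption are invented for
illustration.

```python
import numpy as np

def teach(patterns):
    """Set up the connections storing the given +/-1 patterns (Hebbian rule)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)      # strengthen links between co-firing nodes
    np.fill_diagonal(W, 0)       # no node is wired to itself
    return W

def recall(W, state, sweeps=10):
    """Update firing states until a stable configuration is (hopefully) reached."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one 8-node firing pattern, then recall it from partial information.
memory = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = teach(memory)
corrupted = memory[0].copy()
corrupted[:2] *= -1              # flip two nodes: an imperfect input
print(recall(W, corrupted))      # settles back to the stored memory
```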
- In the Perceptron model the network consists of three
layers: an input layer, a hidden layer and an output layer.
Signals feed forward only, from the input layer to the hidden
layer and then on to the output layer.
- No connections exist within layers.
- These networks can deal with problems in pattern
recognition, classification and simple decision making.
- The programming takes place by a learning process,
somewhat similar to that seen in humans; the network does
not approach the solution of a problem by employing a set
of rules (a computer program). A sketch of such training
follows below.
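As an illustration of such a learning process, the sketch
below trains a three-layer network by example on the XOR
problem. The sigmoid nodes, gradient-descent weight updates
(backpropagation) and the particular layer sizes and learning
rate are assumed choices for this sketch, not details taken
from these notes.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input examples
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden connections
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):               # a lengthy, supervised learning process
    hidden = sigmoid(X @ W1 + b1)    # signals feed forward: input -> hidden
    out = sigmoid(hidden @ W2 + b2)  # hidden -> output; none within a layer
    # Propagate the output error backwards and adjust the connections.
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hid
    b1 -= 0.5 * d_hid.sum(axis=0)

print(out.round(2))  # typically ends close to the targets 0, 1, 1, 0
```

XOR is a natural test case here because a network with no
hidden layer cannot solve it.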
- They have already seen wide application in fields
as diverse as medical diagnosis, image processing, text
recognition, speech synthesis and financial forecasting.
- The Perceptron can be used for solving problems for which
there are no simple rules but plenty of possibly noisy
examples of solutions. Once trained, it is very efficient at
giving good solutions to these problems.
- The network shows a limited ability to
generalize from its training data. This means that
the network can 'learn' the special features of its
input data and use this information to produce useful
responses to new data of the same type, as the sketch
below illustrates.
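To illustrate this, the sketch below trains a single-layer
perceptron, using the classic perceptron learning rule, on
noisy copies of two invented prototype patterns and then
tests it on fresh noisy copies it has never seen. All
patterns, noise levels and counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
prototypes = {+1: np.array([1] * 5 + [-1] * 5),   # two invented 10-element
              -1: np.array([-1] * 5 + [1] * 5)}   # +/-1 template patterns

def noisy_copy(p):
    """Flip each element with probability 0.1: a possibly noisy example."""
    flips = rng.random(p.size) < 0.1
    return np.where(flips, -p, p)

# Supervised training: nudge the weights whenever the network errs.
w = np.zeros(10)
for _ in range(200):
    label = rng.choice([+1, -1])
    x = noisy_copy(prototypes[label])
    if np.sign(w @ x) != label:      # the classic perceptron learning rule
        w += label * x

# Test on new noisy data of the same type, never seen during training.
labels = rng.choice([+1, -1], size=100)
hits = sum(np.sign(w @ noisy_copy(prototypes[l])) == l for l in labels)
print(f"{hits}/100 unseen noisy patterns classified correctly")
```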
- Like the Hopfield network, the Perceptron suffers from
needing a long, supervised learning procedure if it is
to perform reasonably.
- While artificial neural networks have had some success
in quite a wide variety of areas, and clearly bear some
resemblance to the brain, they suffer from a major disadvantage.
- The problem with these networks
is that they require a lengthy, supervised teaching process to
produce useful responses. This process is not seen
in the biological situation.
- Our conclusion is that while these networks are
relatively successful, what we really need is a network
capable of learning without such a lengthy, supervised
teaching process.