We have seen that artificial neural networks based on simple models of neurons and their connections can be very successful both in simulating memory storage and recall (the Hopfield network) and in pattern-based decision making and learning (the Perceptron model). Both of these networks have already found wide application outside of neuroscience - in fields as diverse as signal processing, recognition and synthesis of speech, financial forecasting and modelling, and medical diagnosis.
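As a reminder of how the Hopfield network stores and recalls memories, the following is a minimal sketch (not from the text): bipolar patterns are stored with Hebbian outer-product weights, and a corrupted cue is driven back to the stored pattern by repeated sign updates.

```python
import numpy as np

def train(patterns):
    """Store patterns via the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)      # strengthen connections between co-active units
    np.fill_diagonal(W, 0)       # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Iterate synchronous sign updates until the state stops changing."""
    s = state.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1    # break ties consistently
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1]])
W = train(stored)
noisy = stored[0].copy()
noisy[0] = -noisy[0]             # corrupt one unit of the cue
print(np.array_equal(recall(W, noisy), stored[0]))
```

With a single stored pattern and one flipped unit, the update drives the state back to the memory in one step; with many stored patterns, capacity limits apply.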
In general, neural networks provide good solutions to problems with the following features:
There are two principal problems with the use of these networks. The first is that there is currently no understanding of how large a network (how many nodes and connections) must be in order to tackle a problem of a given complexity (the Hopfield network being the exception). The second is the very long time sometimes needed to teach the network the appropriate responses. These networks learn in a supervised way: input data are fed to the network many times, and the connections are adjusted so as to drive the output toward a target. This "programming" stage can mean that a given pattern must be presented to the network thousands of times.
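The supervised "programming" stage described above can be sketched with a single perceptron learning the AND function - an illustrative example, not taken from the text. The same four patterns are presented over many epochs, and the weights are nudged toward the target output whenever the response is wrong.

```python
import numpy as np

def step(x):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = np.array([0, 0, 0, 1])     # logical AND

w = np.zeros(2)
b = 0.0
lr = 0.1
for epoch in range(100):             # repeated presentation of the same data
    errors = 0
    for x, t in zip(X, targets):
        y = step(w @ x + b)
        w += lr * (t - y) * x        # perceptron learning rule
        b += lr * (t - y)
        errors += int(y != t)
    if errors == 0:                  # stop once every pattern is correct
        break

print([step(w @ x + b) for x in X])  # learned responses to the four patterns
```

Even for this tiny linearly separable problem, several passes over the data are needed before the weights settle; for realistic problems the number of presentations grows enormously, which is exactly the cost the text refers to.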
A general statement can also be made about both of these networks: we get out pretty much what we put in, since we decide how the network is to respond and adapt during learning. This is clearly rather different from the brain, which "by itself" is able to set up connections between neurons in order to accomplish certain functions - it is said to exhibit self-organization. It is difficult to imagine how the complexity of human thought and consciousness could emerge from anything other than a self-organizing system. We shall discuss these wider issues concerning artificial intelligence later.