Artificial neural networks are computers whose architecture is modeled after the brain. They typically consist of hundreds of simple processing units wired together in a complex communication network. Each unit, or node, is a simplified model of a real neuron which fires (sends off a new signal) if it receives a sufficiently strong input signal from the other nodes to which it is connected. The strength of these connections may be varied so that the network performs different tasks, corresponding to different patterns of node firing activity. This structure is very different from that of traditional computers.
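The firing rule described above can be sketched in a few lines of code. This is a minimal illustration of a single threshold unit; the particular weights, inputs, and threshold below are made-up values chosen for the example, not anything specified in the text.

```python
# Illustrative sketch of one node: it fires if the combined signal from the
# nodes it is connected to exceeds its threshold. All values are hypothetical.

def fires(inputs, weights, threshold):
    """Return True if the weighted input signal is strong enough to fire."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

inputs = [1, 0, 1]          # signals from connected nodes (1 = firing, 0 = silent)
weights = [0.5, 0.8, 0.4]   # connection strengths
print(fires(inputs, weights, threshold=0.7))  # combined input 0.9 >= 0.7, so True
```

Changing the weights changes which input patterns make the node fire, which is exactly how the network as a whole is adapted to different tasks.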
The traditional computers that we deal with every day have changed very little since their beginnings in the 1940s. While there have been very significant advances in the speed and size of the silicon-based transistors that form their basic elements (the hardware), the overall design, or architecture, has not changed significantly. They still consist of a central processing unit, or CPU, which executes a rigid set of rules (the program, or software) sequentially, reading data from and writing data to a separate unit, the memory. All the "intelligence" of the machine resides in this set of rules, which are supplied by the human programmer. The usefulness of the computer lies in its vast speed at executing those rules: it is a superb machine, but not a mind.
Neural networks are very different: they are composed of many rather feeble processing units connected into a network. Their computational power comes from the units working together on a task, which is sometimes termed parallel processing. There is no central CPU following a logical sequence of rules; indeed, there is no set of rules or program at all. Computation instead emerges from a dynamic process of node firings. This structure is much closer to the physical workings of the brain, and it leads to a new type of computer that is rather good at a range of complex tasks.
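The idea of computation as a dynamic process of node firings, with no central program, can be sketched as one synchronous update step of a tiny network: every node recomputes its state at the same time from the current firing pattern of the others. The three-node network and its weights below are hypothetical values invented for the illustration.

```python
# Hypothetical sketch: one parallel update step of a small network. Each node
# fires (state 1) if the weighted signal from the other nodes crosses its
# threshold; all nodes update simultaneously -- there is no sequential program,
# only the pattern of connection strengths.

def step(states, weights, threshold=0.5):
    n = len(states)
    return [
        1 if sum(weights[i][j] * states[j] for j in range(n) if j != i) >= threshold
        else 0
        for i in range(n)
    ]

weights = [          # symmetric connection strengths between three nodes
    [0.0, 0.6, 0.2],
    [0.6, 0.0, 0.6],
    [0.2, 0.6, 0.0],
]
states = [1, 0, 0]               # initial firing pattern
states = step(states, weights)   # all three nodes update in parallel -> [0, 1, 0]
```

Iterating `step` produces a sequence of firing patterns, and it is this evolving pattern, shaped entirely by the connection strengths, that carries out the computation.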