A wide-eyed look at the theory behind neural networks (a recently resurrected scientific model of the human brain) and the artificial-intelligence programs that the theory has spawned in recent years. Allman, co-editor of Newton at the Bat (YA nonfiction), is at first almost unsettlingly enthusiastic about the "connectionist" scientists and philosophers he profiles and about their controversial theory of how the human brain intuits, generalizes, and learns. Once he finds a calmer pace, though, the author provides a clear if highly simplified explanation of neural-network theory: basically, that our brains do not solve problems in a linear, logical, computer-like fashion. Instead, each of the brain's billions of neurons reacts (or fails to react) to each bit of stimulus in the brain, depending on the strength of the stimulus and on the threshold of the neuron's tolerance. Once stimulated, the neurons discharge their quota of electrical energy, which sets off more reactions, until the system finally settles down to a "stable" response. This reasoning by consensus, say the connectionists, is responsible for the creativity and intuitiveness that standard (and more precise) computers lack. Allman goes on to describe neural-network computer programs that have resulted from this general theory, including NETtalk, a program that can teach itself to read aloud through a gradual, human-like recognition of patterns. Possible future neural-network applications include robots that can repair equipment in space, "smart" weapons that can recognize underwater targets, and computers that can read handwriting and understand the human voice. The programs may also help theorists learn more about how the neurons of the brain combine to create the human mind. A welcome update on the connectionists' recent accomplishments, and a highly accessible introduction to the continuing debate over brain versus mind.
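The settling process the review describes, in which threshold units fire, trigger further reactions, and eventually reach a "stable" response, can be sketched as a toy Hopfield-style network. This is an illustration of the general idea, not code from the book; the pattern, weights, and function names are invented for the example.

```python
# A minimal sketch (assumed example, not from the book) of neurons as
# threshold units that update until the network settles to a stable state.

def train(pattern):
    """Hebbian weights that store one +1/-1 pattern (Hopfield-style)."""
    n = len(pattern)
    return [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
            for i in range(n)]

def settle(weights, state, max_steps=100):
    """Update each unit against its threshold until nothing changes."""
    state = list(state)
    for _ in range(max_steps):
        changed = False
        for i, row in enumerate(weights):
            stimulus = sum(w * s for w, s in zip(row, state))
            new = 1 if stimulus >= 0 else -1  # fire only past the threshold (here zero)
            if new != state[i]:
                state[i], changed = new, True
        if not changed:        # the "stable" response the review mentions
            break
    return state

stored = [1, -1, 1, 1, -1, -1, 1, -1]
W = train(stored)
noisy = list(stored)
noisy[0], noisy[3] = -noisy[0], -noisy[3]  # corrupt two units
print(settle(W, noisy))  # settles back to the stored pattern
```

Starting from the corrupted input, each unit's update nudges the network toward the stored pattern, and after a pass or two no unit changes, i.e., the system has settled.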