In 1950, Turing predicted computer programs bright enough to fool a human by the year 2000; today, scientists consider 300 years from now a more likely estimate. This look at the last 30 years of formal research in artificial intelligence acquaints readers with the host of discoveries and ingenious tricks that have led to more sophisticated software, increasingly complex hardware, and some of the findings about intelligence and thinking that have changed those estimates so quickly. Computers work in a linear and hierarchical fashion; the human brain often does not. The author describes the kinds of problems that computers can and cannot handle well--electronic speed can't mimic intuition or use human language in a realistic way. Patent also describes many applications (in games, robotics, medical data processing, etc.) for programs that have already been designed, but concludes that despite worldwide "fifth generation" projects aimed at radical new computer structures, creativity and understanding are qualities that will remain exclusively human for the foreseeable future. Notes and an annotated bibliography are current to mid-1985. This evenhanded treatment of both the technical and philosophical aspects of this new science should be welcomed.