AI: It's OK Again!

Over the last half century, AI has had its ups and downs. But for now, it's on the rise again.


September 05, 2007
URL: http://drdobbs.com/architecture-and-design/ai-its-ok-again/201804174

In 1956, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon organized the Dartmouth Summer Research Conference on Artificial Intelligence. The conference proposal said:

An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

Fifty-one years later, if we go by Jonathan Schaeffer and his team of researchers at the University of Alberta (www.cs.ualberta.ca/~chinook), we know the outcome of checkers played perfectly by either player, man or machine: it's a draw.

Okay, that wasn't fair. It took more than a summer, but significant advances have been made on all of the targeted problems. Despite a seemingly genetic propensity to overpromise, the field of artificial intelligence has accomplished a lot in the past five decades. On the occasion of the 22nd annual AAAI conference this past July, we thought it appropriate to reflect on AI's 51-year history and check in with some experts about the state of AI in 2007.

Hype and History

When, in 1981, Avron Barr, Edward Feigenbaum, and Paul Cohen published their multivolume The Handbook of Artificial Intelligence, they defined the field as "the part of computer science concerned with designing intelligent computer systems, that is, systems that exhibit the characteristics we associate with intelligence in human behavior." Dennis Merritt's definition, "the art and science of making computers do interesting things that are not in their nature," articulated in Dr. Dobb's AI Expert Newsletter, is more or less the same idea minus some of the anthropocentrism. John McCarthy, who has creator's rights to the field, defined it as "the science and engineering of making intelligent machines."

Any of these definitions would do fine to capture the field today, based on the kinds of research presented this July.

Back in 1981, Feigenbaum et al. reckoned that AI was already 25 years old, dating it to that Dartmouth conference. By age 25, AI was a gangly and arrogant youth, yearning for a maturity that was nowhere evident. If in 1956 the themes were natural language processing, abstractions/concepts, problem-solving, and machine learning, by 1981 the focus wasn't that different: natural-language processing, cognitive models/logic, planning and problem solving, vision (in robotics), and machine learning, plus the core methods of search and knowledge representation. Feigenbaum et al. showcased applications in medicine, chemistry, education, and other sciences.

That was the heyday of AI. "The early 1980s were... the last opportunity to survey the whole field.... AI was already growing rapidly, like our ubiquitous search trees..." (Barr, Cohen, and Feigenbaum, The Handbook of Artificial Intelligence, Volume 4, 1989.) And largely that meant it was the heyday of the symbolist approach.

Analysis and Synthesis

In the same year that Feigenbaum et al. were publishing The Handbook of Artificial Intelligence, G.E. Hinton and J.A. Anderson came out with their Parallel Models of Associative Memory, and David Rumelhart and James McClelland, joined by Hinton, started work on a project that resulted in the two-volume Parallel Distributed Processing.

If The Handbook was the handbook of GOFAI ("good old-fashioned artificial intelligence," the attempt to model human intelligence at the symbolic level), then Parallel Distributed Processing was the handbook of connectionism. Symbolism and connectionism have been competing themes in AI work throughout its history.

The Handbook of Artificial Intelligence, though, shone its light on AI successes, which then were in the symbolist tradition: mostly expert systems, models of specific subject-matter domains that embody domain-specific rules and an inference engine through which the system draws conclusions from those rules. MYCIN, Edward Shortliffe's medical-advice program, is a good example. Implemented in the mid-1970s, MYCIN engaged in a dialog with a doctor about a patient, assembling information on the basis of which it suggested a diagnosis and recommended a treatment. Its advice compared favorably with that of domain experts in several disease domains.
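
To make the expert-system recipe concrete, here is a minimal sketch of the idea in Python: a handful of hand-written domain rules plus a naive forward-chaining inference engine that keeps firing rules until no new conclusions appear. The rules and symptom names are invented for illustration, and MYCIN itself worked differently (backward chaining with certainty factors), so this is only a toy rendering of the rules-plus-inference-engine pattern.

    # Toy rule-based "expert system": domain rules plus a naive
    # forward-chaining inference engine. Rule and fact names are invented.
    RULES = [
        # (antecedents, conclusion)
        ({"fever", "stiff_neck"}, "suspect_meningitis"),
        ({"suspect_meningitis", "gram_negative_stain"}, "suspect_e_coli"),
        ({"suspect_e_coli"}, "recommend_gentamicin"),
    ]

    def infer(facts, rules):
        """Repeatedly fire any rule whose antecedents are all known facts."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedents, conclusion in rules:
                if antecedents <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    observed = {"fever", "stiff_neck", "gram_negative_stain"}
    print(infer(observed, RULES))
    # adds: suspect_meningitis, suspect_e_coli, recommend_gentamicin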

Expert systems represent only an early example of the symbolist approach. The logic of that approach is that, "since the phenomenon of interest is human symbolic reasoning, we should be modeling at that level, both in order to succeed and in order to understand our success—to understand how human brains work once we have a working AI system," according to Larry Yaeger of Indiana University. Marvin Minsky, Douglas Hofstadter, and Douglas Lenat are among those promulgating the symbolist view today. (Although Hofstadter, whose work on fluid concepts seems squarely in the symbolist tradition, says he hasn't read any AI journals in the past 10 to 15 years. "I just pursue my own goals and ignore just about everyone and everything else," he says. And that is in itself a comment on the state of AI today.)

Today, "the symbolic paradigm...has turned out to be a dead end," Terry Winograd says.

That seems harsh, given that many presentations at AAAI were arguably in the symbolist tradition. There was a whole track on AI and the Web, much of which dealt with Web 3.0 issues like ontologies and semantic descriptions.

Some of those seem pretty intelligent. "It's amazing how intelligent a computer program can seem to be when all it's doing is following a few simple rules...within a limited universe of discourse," says Don Woods, who, as the creator of the classic game Adventure, showed the world how to do just that.

But the limited universe of discourse is the problem. We tend to regard brittleness at the edge of domains to be evidence of lack of intelligence. "[E]xplain your symptoms in terms of drops rather than drips," says Yaeger, and "the best medical diagnosis software...won't have a clue."

Maybe a bigger universe of discourse is the answer? With more intelligence built into the universe itself? MIT's Rodney Brooks thinks that's important: "We have reached a new threshold in AI brought about by the massive amount of mineable data on the web and the immense amount of computer power in our PCs."

James Hendler points to "an early wave of Web 3.0 applications now starting to hit the Web," and sees big opportunities in nontext search. "Wouldn't it be nice if you could ask a future Google to recommend some potential friends for your MySpace links?" Hendler, Tim Berners-Lee, and Ora Lassila wrote the defining article on the Semantic Web (www.w3.org/2001/sw), and while Berners-Lee says the Semantic Web is not AI, it is tempting to see it as the ultimate AI knowledge base.
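
As a toy illustration of what a machine-readable web of statements buys you, here is a sketch, in plain Python rather than RDF and SPARQL, of the friend-recommendation query Hendler imagines: assert (subject, predicate, object) triples, then ask for friends of friends you don't already know. All names and triples here are invented.

    # Toy "knowledge base" of machine-readable triples; a real Semantic Web
    # application would use RDF triples and a SPARQL query instead.
    triples = {
        ("alice", "knows", "bob"),
        ("bob", "knows", "carol"),
        ("bob", "knows", "dave"),
        ("alice", "knows", "dave"),
    }

    def knows(person):
        return {o for s, p, o in triples if s == person and p == "knows"}

    def suggest_friends(person):
        """People known by my friends whom I don't already know."""
        direct = knows(person)
        return {fof for friend in direct for fof in knows(friend)} - direct - {person}

    print(suggest_friends("alice"))   # {'carol'}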

Or maybe that would be Doug Lenat's Cyc project (www.cyc.com). "It started with the goal of entering an entire encyclopedia's knowledge into the computer, but extending every entry so that all underlying assumptions—all common sense and background knowledge—[were] also entered," Yaeger says. Cyc has evolved in its goals, but "[i]f there's any hope of making GOFAI work...Cyc seems like its best hope."

But Brooks cautions, "we still have great challenges in making these systems as flexible, as deep, and as intellectually resilient as a two-year-old child." Winograd thinks that the symbolist approach will never get there: "In order to build human-like intelligence," he says, "researchers will need to base it on a deep understanding of how real nervous systems are structured and how they operate." Connectionism, it seems, is ascendant.

The word "connectionist" was first used in the context of mental models by D.O. Hebb in 1949, but its influence on AI researchers dates to Rosenblatt's use in his Perceptrons paper in 1958. Minsky and Papert killed the nive perceptron model stone dead in 1969 and more or less interred connectionism along with it, until Parallel Distributed Processing resurrected it in 1987.

"The idea behind connectionism," Yaeger says, "is that key aspects of brain behavior simply cannot be modeled at the symbolic level, and by working closer to the physical system underlying human thought—the brain and its neurons and synapses—we stand both a much greater chance of succeeding at producing AI and of understanding how it relates to real human thought." Yaeger is wholeheartedly in the connectionist camp, and in particular in the tradition spearheaded by John Holland and advanced by Stephen Wolfram and Chris Langton and others, cellular automata and Artificial Life.

The connectionist approach is basically synthesis, or bottom-up; the symbolist approach is analysis, top-down. Both are doubtless necessary. "[S]ymbols-only AI is not enough, [but] subsymbolic perceptual processes are not enough either," Winston says.

Science and Engineering

So what about the engineering side of AI? What about real working systems that solve real problems? There the news seems good.

In terms of real engineering and applied science accomplishments, "[t]he most active and productive strand of AI research today is the application of machine learning techniques to a wide variety of problems," Winograd says, "from web search to finance to understanding the molecular basis of living systems." Work like this, and advances in other areas such as robotics, are taking us in the direction of more intelligent artifacts, "and will lead to a world with many 'somewhat intelligent' systems, which will not converge to human-like intelligence."

Rodney Brooks sees great progress being made in practical systems involving language, vision, search, learning, and navigation, systems that are becoming part of our daily lives. Nils Nilsson took time out from writing a book on the history of AI to share some thoughts on its state today, citing practical results of AI work in adjacent fields like genomics, control engineering, data analysis, medicine and surgery, computer games, and animation.

In a forthcoming book, Hamid Ekbia examines the unique tension between the engineering and science goals of AI:

Artificial Intelligence seeks to do three things at the same time:

1. as an engineering practice, AI seeks to build precise working systems;

2. as a scientific practice, it seeks to explain the human mind and human behavior;

3. as a discursive practice, it seeks to use psychological terms (derived from its scientific practice) to describe what its artifacts (built through the engineering practice) do.

This third practice, which acts like a bridge between the other two, is more subjective than the other two.

And that, he argues, is why the field has such dramatic ups and downs and is so often burdened with over-promising and grandiosity. The gap between AI engineering and AI as a model of intelligence is so large that trying to bridge it almost inevitably leads to assertions that later prove embarrassing. McCarthy said AI was "the science and engineering of making intelligent machines." If that is its hope, maybe it can't escape hype.

Winners and Losers

Right now, the balance in AI work seems to be tipped toward applied over theoretical, and toward the connectionist over the symbolist. But if history is a guide, things could shift back. Another tilt noticeable in the AI work presented at AAAI this summer is toward modesty over hype, a shift that has been under way since the AI Winter of the '90s, which followed the overpromising and disappointment of the '80s. AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field. "AI has become more important as it has become less conspicuous," Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world." And that note of modesty may be a good thing both for the work and for AI.
