Enabling machines to reason like humans 

In 1956, a group of computer scientists gathered at Dartmouth College to delve into a brand-new topic: artificial intelligence.

The summer rendezvous in the Connecticut River Valley town of Hanover, New Hampshire, served as a springboard for discussions on ways that machines could simulate aspects of human cognition: How can computers use language? Can machines improve themselves? Is randomness a factor in the difference between creative thinking and unimaginative competent thinking?

The underlying assumption was that, in principle, learning and other aspects of human intelligence could be described precisely enough that a machine could be programmed to simulate them.

Principal figures at the Dartmouth conference included such notables as Marvin Minsky, then of Harvard University; Claude Shannon of Bell Laboratories; Nathaniel Rochester of IBM; and Dartmouth's own John McCarthy.

It was McCarthy who put the name "artificial intelligence" to the field of study, just ahead of the conference. With Dartmouth hosting a 50th anniversary conference this month, McCarthy, now a professor emeritus at Stanford University, spoke about the early expectations for AI, the accomplishments since then and what remains to be done.

Q: You're credited with coining the term "artificial intelligence" just in time for the 1956 conference. Were you just putting a name to existing ideas, or was it something new that was in the air at that time?

A: Well, I came up with the name when I had to write the proposal to get research support for the conference from the Rockefeller Foundation. And to tell you the truth, the reason for the name is, I was thinking about the participants rather than the funder. What's needed is to figure out good ways of constructing new ideas from old ones.

Claude Shannon and I had done this book called Automata Studies, and I had felt that not enough of the papers that were submitted to it were about artificial intelligence, so I thought I would try to think of some name that would nail the flag to the mast.

Q: And looking back, do you think that's the right term? It seems fairly self-evident, but would there be a better way to describe this kind of research?

A: Well, there are some people who want to change the name to "computational intelligence"... It seems to me I couldn't have used (that term in 1955) because the idea that computers would be the main vehicle for doing AI was far from unanimous. In fact, it would have been a minority view at that time.

Q: At the time, in that proposal, you had said (about using computers to simulate the higher functions of the brain) that "the major obstacle is not the lack of machine capacity but our inability to write programs taking full advantage of what we have". So the machinery was there, but the programming skills weren't?

A: It wasn't a question of skills, it was a question of basic ideas, and it still is. One of them that comes up very clearly is when you compare how well computers play chess with how badly they play Go, in spite of comparable effort having been put in. The reason is that in Go, you have to consider the situation, the position... and furthermore, you have to identify the parts, and how to do that isn't well understood, even yet.

Q: So the attendees in 1956, and I'm sure you, too, were very optimistic about what could be done by, say, the 1970s with chess playing, composing classical music, understanding speech. How far did we get in the 50 years? Were the initial expectations too optimistic?

A: Mine were, certainly. I think there were some others there who were rather pessimistic.

Well, the thing is, you can only take into account the obstacles that you know about, and we know about more than we knew then.

Q: What are some of the big things that have been learned over the last 50 years that have helped shape research in artificial intelligence?

A: Well, I suppose one of the big things was the recognition that computers would have to do nonmonotonic reasoning.

In ordinary logical deduction, if, say, you have a sentence P that is deducible from a collection of sentences (call it A), and we have another collection of sentences B, which includes all the sentences of A, then P will still be deducible from B, because the same proof will work. However, humans do reasoning in which that is not the case. Suppose I said, "Yes, I will be home at 11 o'clock, but I won't be able to take your call". From the first part, "I will be home at 11 o'clock," you would conclude that I could take your call; but once I added the "but" phrase, you would not draw that conclusion.

So nonmonotonic reasoning is where you draw a conclusion, which may be a correct conclusion to draw, but it isn't guaranteed to be true...
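McCarthy's phone-call example can be sketched in a few lines of code. This is an illustrative toy, not any particular formalism: the default rule and the sentence names are invented for the example. The key point is that adding a premise can retract a conclusion, which is exactly what monotonic deduction forbids.

```python
def conclusions(premises):
    """Derive conclusions from a set of premises, applying one default rule:
    'home at 11' normally implies 'can take call' -- unless the premises
    explicitly block it. The rule and labels are illustrative only."""
    derived = set(premises)
    if "home at 11" in derived and "cannot take call" not in derived:
        derived.add("can take call")  # default conclusion; defeasible
    return derived

# In monotonic logic, every conclusion from A would survive in any B ⊇ A.
a = {"home at 11"}
b = a | {"cannot take call"}  # B contains all of A, plus the "but" phrase

print("can take call" in conclusions(a))  # True: the default fires
print("can take call" in conclusions(b))  # False: more premises, fewer conclusions
```

The second call shows the nonmonotonic behavior: enlarging the premise set from A to B removes a conclusion rather than preserving it.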
