I had McCarthy as a professor in the 1980s, just about when it was clear that logic-based AI had hit a wall. McCarthy was determined to put mathematical logic as a formalism under the real world, and it just wasn't working.
He once described the "missionaries and cannibals" problem in class. Then he wrote it up in his notation for "circumscription", cranked the notation, and the answer came out. I thought at the time, as he converted the problem to his notation, "here's where the miracle occurs". That's the problem: hammering the real world into a formalism is very hard. Cranking the formalism is not difficult to automate.
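To make that last point concrete, here's a minimal sketch (my own encoding, not McCarthy's circumscription notation): once the puzzle is written down as a state space, a blind breadth-first search cranks out the answer mechanically.

```python
from collections import deque

# Toy illustration: a state is (missionaries, cannibals, boat) counted on
# the left bank; boat is 1 when the boat is on the left bank, 0 otherwise.
START, GOAL = (3, 3, 1), (0, 0, 0)
MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # who rides the boat

def safe(m, c):
    # Missionaries are never outnumbered on either bank (unless absent).
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve():
    frontier, seen = deque([(START, [START])]), {START}
    while frontier:
        (m, c, b), path = frontier.popleft()
        if (m, c, b) == GOAL:
            return path
        for dm, dc in MOVES:
            # Crossing left-to-right removes people from the left bank;
            # crossing right-to-left adds them back.
            nm, nc = (m - dm, c - dc) if b else (m + dm, c + dc)
            nxt = (nm, nc, 1 - b)
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

print(solve())  # the shortest (11-crossing) sequence of states
```

Note where the work is: the "miracle" is the dozen lines of encoding at the top (START, MOVES, safe); the search underneath is trivial.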
Mathematical logic works fine where the problem space can be formalized, such as in software proofs of correctness. But trying to get to "common sense reasoning" that way, which was McCarthy's goal, just didn't work. It was a great disappointment to him.
AI as a field has suffered from a repeated problem - believing that the next big development will lead to strong AI. Graph search, the General Problem Solver, expert systems, and big databases of knowledge have all been touted as the solution. In each case, the ceiling of what could be done rose a bit. Machine learning isn't a panacea, but it, too, has raised the ceiling of what can be done.
(It's personally frustrating that much of my career was during the "AI winter", from about 1985 to 2005. Expert systems had failed, and machine learning didn't work yet. I spent a lot of time trying to find AI in control theory. Combining machine learning and control theory is at last working out, which is why autonomous drones are now so agile. But it happened 15 years later than I'd hoped.)
> as he converted the problem to his notation, "here's where the miracle occurs"
Exactly what I see as the future of all types of engineering, computer control, and interaction, e.g. programming languages, circuit engineering, database query languages, all kinds of problem solving.
The future of all of these is to describe the problem you want to solve directly to the computer, in a form that directly mirrors how you think through the problem, with nearly zero translation between the way we approach it and the way we communicate it to the machine.
To me, this perfectly describes the evolution from C/Java to Perl to Python. The shift from highly relational SQL to document-based NoSQL databases. The shift from bladed weapons to point-and-shoot guns. The way you interact with a doctor, or a future doctor AI: it infers from your layman's description what you're really communicating, then synthesizes that into a possibly multi-iteration search strategy for solving the problem.
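To illustrate the database point with a toy example (the schema and names here are hypothetical): the relational query mirrors the storage layout, while the document version is shaped like the question itself.

```python
# Hypothetical toy example: the same question, asked two ways.

# Relational style: the query mirrors how the data is stored (two tables,
# a join key), not how we think about the question.
sql = """
SELECT orders.total
FROM orders
JOIN customers ON orders.customer_id = customers.id
WHERE customers.name = 'Ada'
"""

# Document style: the data is already shaped like the question.
customer = {"name": "Ada", "orders": [{"total": 12.50}, {"total": 7.25}]}
totals = [order["total"] for order in customer["orders"]]
print(totals)  # [12.5, 7.25]
```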
Today, the technology's ability to achieve this still holds us back substantially. Over time, though, we'll keep converging on ways of communicating our creations and desires to computers, through some kind of language or interaction that requires no intermediate translation on our part. For some tasks, I think we're actually not that far off. Our interaction methods won't grow stranger and harder to learn, but strangely more familiar and natural.
IMHO, too much hubris about how logically and formally we reason led to this top-down approach.
"
Our opinion, and that of the knowledge representation community, is that it is better to provide computer programs with common sense concepts, suitably formalized
" - McCarthy.
The recent successes with deep neural nets, by contrast, arrive at something like reason, or a statistical approximation to it, through complex hierarchical statistical inference, built bottom-up from real data.
Word2vec suggests that neural networks come up with different representations of the data than symbolic AI does, but ones with a great deal of semantic utility. High-level logic, semantics, and representations become transformations and vector arithmetic in the word2vec embedding space.
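A small sketch of that vector arithmetic, assuming the gensim library and one of its downloadable pretrained embedding sets (any pretrained embedding illustrates the point):

```python
# Sketch, assuming gensim is installed; the model name is one of the
# pretrained sets published via gensim-data (downloads on first use).
import gensim.downloader

vectors = gensim.downloader.load("glove-wiki-gigaword-50")

# Analogy as arithmetic: vec("king") - vec("man") + vec("woman")
# lands nearest to vec("queen") in the embedding space.
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```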
Symbolic AI was probably the wrong place to look for strong general AI, because that is not the way meat brains really reason.
High-level logic is often a justification, or a proof after the fact, of something that jumped into one's thoughts intuitively, as if from nowhere.
McCarthy's work is reasoning about reasoning; real reasoning, as practiced by animals, is probably more like the hierarchical, fuzzy, stochastic society of processes that neural nets use to jump to conclusions.
Both Turing and von Neumann saw that mechanical computing could be performed in two distinct ways: logically and sequentially, or statistically and evolutionarily.
All through AI's history there has been a dichotomy: embodied robotics, evolutionary and statistical methods, and raw data on one side, versus hand-engineered, sanitised, abstracted symbolic logic on the other.
It's well known that machine learning is currently useful for solving some problems and irrelevant to others, as is the case with all existing AI algorithms. ML is not strong AI, and nobody has ever claimed it was, so it seems an odd criticism.
Of course the name in the title should be 'McCarthy'. Also, I think that the rest of the title is misleading; the paper seems to be about what McCarthy sees as the challenges to the subject, rather than about his personally challenging the subject.