Moravec’s Paradox and The Chinese Room

Rohit
3 min read · Apr 23, 2016

One of the most interesting insights of the last few decades, and something Steven Pinker has called the only true insight into AI, is Moravec’s Paradox. Put simply, it says that things we consider extremely difficult, requiring high-level reasoning, such as mathematics or playing chess, turn out to be computationally simple. But things we consider trivially easy, such as going hiking, putting up a shelf, or cooking pasta, are incredibly difficult for a computer to do.

One reason behind this might be that the ‘easy’ skills above are all mechanisms honed by evolution over millions of years, and thus at the very pinnacle of its ability. If we consider humans as carriers of a mythical unit of competence, called the comp, then it stands to reason that the ancient comps, for perception and movement, have evolved to the peak of their potential, while the new comps, for stock trading and algebra, are still at an infantile level. A comp that has been refined over millions of years will be far tougher to engineer within an artificial system than one that has been evolving for only a few thousand.

What this means is that as the AI revolution keeps chugging along, it’s the stock analysts and the consultants and the bankers who are at high risk of being replaced, rather than the cooks and gardeners. What this also means is that the true revolution would happen not when algorithms and computers do our white collar jobs for us, but rather when they get physically embodied and start doing things that even a five year old can do now.

The paradigm shift required to see the world in comp-centric terms rather than human-centric terms also throws the very nature of intelligence into a new light. It no longer hinges simply on someone’s ability to do an IQ test or to play Go; instead it can be visualised as a terrain with multiple peaks of comps. The ground on which the peaks stand itself rises as we evolve, and however much effort it takes us to climb to the apex of a peak, it’s nothing compared to the effort it takes to go from zero to ground level.

So perhaps it’s worth taking a step back and thinking in terms of a fable. Once there was a wise philosopher who sought to poke a hole in the theory of consciousness, as wise philosophers are wont to do. So he poked and prodded and thought and wrote, and came up with a theory that showed once and for all why all these icky machines, with their oils and chips and dumbness, could never think the way he, the wise philosopher, could.

So he came up with the Chinese Room. Not an actual Chinese room, but one that existed in his mind. In this fictional room, a man who spoke no Chinese consulted an immense and exhaustive rule book to turn English into Chinese, and with it the philosopher tried to show that running a program that translates Chinese is not the same as knowing Chinese.

So there.

When I think about these fables, I can’t help imagining a world where this kind of idiocy would be damned, rather than encouraged and debated. The simple error of reducing the process of translating Chinese to ‘a+b=c’ type rules itself seems the height of hubris. After all, if translation were that computationally simple, Skype’s automatic translation, or Babelfish’s very existence, wouldn’t have been such major achievements.
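The ‘a+b=c’ picture of translation that the Room assumes amounts to a lookup table. A minimal Python sketch (the phrasebook entries here are invented purely for illustration) shows both the mechanism and why it breaks down the moment a phrase falls outside the book:

```python
# A toy version of the Chinese Room's premise: translation as pure
# table lookup, with no understanding anywhere in the loop.
# The phrasebook below is an illustrative stand-in, not a real corpus.
PHRASEBOOK = {
    "hello": "你好",
    "thank you": "谢谢",
    "good morning": "早上好",
}

def room_translate(english: str) -> str:
    """Consult the rule book; if no rule matches, the room is mute."""
    return PHRASEBOOK.get(english.lower().strip(), "???")

print(room_translate("hello"))        # covered by a rule
print(room_translate("how are you"))  # no rule, so no answer at all
```

Real machine translation works nothing like this, which is rather the point: the hard part is everything the lookup table leaves out.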

Philosophy abounds with such questions, which might have seemed intriguing and worth pondering a few centuries ago, but surely by now we have realised that the only true insight here is that incorrect or imperfect phrasing of a question makes even the simplest things seem absurd! The fact remains that intelligence seems like magic only because we have not yet been able to adequately break it down into fundamental components and examine them. But that doesn’t mean the fundamental components don’t exist — only that finding them isn’t a trivial exercise.

As we extend our reach into the world of cognition, artificial or natural, we’re discovering that the phenomena we’re uncovering go far beyond rule-based games such as chess. They include the ability to perform incredibly complex tasks in a human-like fashion: learning our reading preferences for articles, deciding whether an email is unwanted, and, most importantly, translating English to Chinese.
