“Where is the field right now? The first lesson that I want to give you is that we are not nearly as close to strong Artificial Intelligence as many believe,” Gary Marcus, Professor of Psychology and Neural Science at New York University, told an audience at the AI for Good Global Summit in Geneva, Switzerland, yesterday.
Marcus – scientist, bestselling author, entrepreneur and AI contrarian – was CEO and founder of the machine learning startup Geometric Intelligence, recently acquired by Uber. He has published extensively in fields ranging from human and animal behavior to neuroscience, genetics and artificial intelligence.
“Hallucinations are really part of where we are right now,” says Marcus, citing the example of an image-recognition app on his smartphone mistaking a water bottle for a pen.
“Deep learning is good at certain aspects of perception, particularly categorization, but perception is more than categorization and cognition or intelligence is more than just perception. There are many things that go into intelligence … And what we have made real progress on is perception, but the rest of it, we still haven’t made that much progress collectively in the field.”
Narrow AI has made rapid strides in solving constrained problems such as chess, but “there are not any data on what I would call strong AI,” says Marcus, noting that little measurable progress has been achieved in general-purpose AI.
“It’s a very empirical science without guarantees right now. We have no procedures for reliably building complex cognitive systems yet.” – Gary Marcus, Professor of Psychology and Neural Science at New York University
Marcus also observes “a bias in the field which is to assume that everything is learnt.” He argues that human beings do not learn everything by trial and error – part of our knowledge is innate, acquired over the course of evolution – leading him to suggest that “we need more innateness if we are going to build intelligent agents … It’s not learning trial-by-trial in the way that our contemporary machines are.”
As to the debate around whether intellectual property rights and related disputes could restrict access to key AI capabilities – potentially to an extent that stifles innovation – Marcus fears that this risk will be difficult to mitigate: “In an ideal world, AI would be a public good, not owned by one corporation or eight individuals or something like that. But we are headed on a path where that is what’s going to happen.”
“If we are going to fulfill the destiny of AI helping in humanitarian organizations, we really want to get to strong AI,” says Marcus. “There is a lot we can do now. There is a lot of fruit to be gained in the next ten years.”
With the corporate world looking to commercialize the latest breakthroughs in AI, and research labs tending to work in isolation from one another, Marcus concludes that “it may be that no existing approach to AI research can efficiently get us to next-generation AI.”
What would be the approach that could?
“World-changing AI is almost certainly going to require massive, interdisciplinary collaboration,” says Marcus, asking the summit’s participants to look just up the road in Geneva to CERN, the European Organization for Nuclear Research, “a global collaboration with thousands of researchers from over 20 countries, working together, in common cause, building technology and science that could never be constructed in individual labs, tackling problems that industry might otherwise neglect.”
“Maybe we need to have a model like that for AI – global collaboration; lots of people doing AI for the common good.”
Watch Gary Marcus’ talk from 1:02:00 to 1:17:20 in the archived webcast of Plenary 2: Transformations on the Horizon.