WIRED has published an interview with President Obama, “Barack Obama, Neural Nets, Self-Driving Cars, and The Future of the World,” to appear in the November issue. Right away, I thought of Apple’s Siri when I spotted a noteworthy comment by Mr. Obama:
The way I’ve been thinking about the regulatory structure as AI emerges is that, early in a technology, a thousand flowers should bloom. And the government should add a relatively light touch, investing heavily in research and making sure there’s a conversation between basic research and applied research. As technologies emerge and mature, then figuring out how they get incorporated into existing regulatory structures becomes a tougher problem, and the government needs to be involved a little bit more. Not always to force the new technology into the square peg that exists but to make sure the regulations reflect a broad base set of values. Otherwise, we may find that it’s disadvantaging certain people or certain groups.
That may be the key question of the 21st century. When is the right time to step in and bring AIs into the framework of human morality, oversight, and government regulation so that powerful corporations don’t use AIs to oppress or exploit people?
Today, one dominant meme is to let businesses flourish. It’s part of American economic strength. On the other hand, there is also a need for benign regulation that ensures businesses don’t run roughshod over citizens. This constant tension between the two camps is healthy because free enterprise and sensible regulation both play a role in the well-being of the economy and its people.
A major problem with AI is that a time may come when AIs can learn and teach themselves faster than humans can manage them. As President Obama suggested, AIs that aren’t properly constrained and regulated could be unleashed on unsuspecting citizens in an out-of-control avalanche.
If that sounds unrealistic, I refer you to this: “Artificial intelligence-powered malware is coming, and it’s going to be terrifying.”
Today, many government leaders are having a tough time understanding and coping with the nuances of science, technology, and climate. The government leaders of the future will face exponentially more dangerous problems and regulatory issues, and it’ll start with Artificial Intelligence entities.
Apple and Siri vs. the World
What role do we expect Apple to play here? Is that even on the roadmap for Siri? For example, Apple took a leadership position against the FBI last spring on personal privacy and encryption. How, in turn, will Apple deal with the competitive evolution of Siri? Let Siri evolve too weakly, and the competition will gobble it up. Make Siri super bright, the ultimate AI, and it could get out of control. Should Apple advise the federal government on regulations that would shackle its own research?
To deal with issues like this, Amazon, Facebook, Google, Microsoft, and IBM have formed “The Partnership on Artificial Intelligence to Benefit People and Society.” It’s described here: “Tech Giants Team Up To Tackle The Ethics Of Artificial Intelligence.” Good stuff.
As of this writing, Apple still hasn’t joined.
Next page: The Tech News Debris for the Week of October 10th. Apple + Intel = Excedrin headache.