By Cade Metz: Elon Musk and Sam Altman worry that artificial intelligence will take over the world…


So, the two entrepreneurs are creating a billion-dollar not-for-profit company that will maximize the power of AI—and then share it with anyone who wants it.

At least, this is the message that Musk, the founder of electric car company Tesla Motors, and Altman, the president of startup incubator Y Combinator, delivered in announcing their new endeavor, an unprecedented outfit called OpenAI. In an interview with Steven Levy of Backchannel, timed to the company’s launch, Altman said they expect this decades-long project to surpass human intelligence. But they believe that any risks will be mitigated because the technology will be “usable by everyone instead of usable by, say, just Google.”

If OpenAI stays true to its mission, it will act as a check on powerful companies like Google and Facebook.

Naturally, Levy asked whether their plan to freely share this technology would actually empower bad actors, whether they would end up giving state-of-the-art AI to the Dr. Evils of the world. But they played down this risk. They feel that the power of the many will outweigh the power of the few. “Just like humans protect against Dr. Evil by the fact that most humans are good, and the collective force of humanity can contain the bad elements,” said Altman, “we think it’s far more likely that many, many AIs will work to stop the occasional bad actors.”

It’ll be years before we know if this counterintuitive argument holds up. Super-human artificial intelligence is an awfully long way away, if it arrives at all. “It’s not yet an open-and-shut argument,” Miles Brundage, a PhD student at Arizona State University who studies the human and social dimensions of science and technology, says of OpenAI. “At the point where we are today, no AI system is at all capable of taking over the world—and won’t be for the foreseeable future.”

But in the creation of OpenAI, there are more forces at work than just the possibility of super-human intelligence achieving world domination. In the shorter term, OpenAI can directly benefit Musk and Altman and their companies (Y Combinator backed such unicorns as Airbnb, Dropbox, and Stripe). After luring top AI researchers from companies like Google and setting them up at OpenAI, the two entrepreneurs can access ideas they couldn’t get their hands on before. And in pooling online data from their respective companies as they’ve promised to, they’ll have the means to realize those ideas. Nowadays, one key to advancing AI is engineering talent, and the other is data.

If OpenAI stays true to its mission of giving everyone access to new ideas, it will at least serve as a check on powerful companies like Google and Facebook. With Musk, Altman, and others pumping more than a billion dollars into the venture, OpenAI is showing how the very notion of competition has changed in recent years. Increasingly, companies and entrepreneurs and investors are hoping to compete with rivals by giving away their technologies. Talk about counterintuitive.

The Advantages of Open

OpenAI is the culmination of an extremely magnanimous month in the world of artificial intelligence. In early November, Google open sourced (part of) the software engine that drives its AI services: deep learning technologies that have proven enormously adept at identifying images, recognizing spoken words, translating languages, and understanding natural language. And just before the unveiling of OpenAI, Facebook open sourced the designs for the computer server it built to run its own deep learning services, which tackle many of the same tasks as Google’s tech. Now, OpenAI is vowing to share everything it builds—and a big focus seems to be, well, deep learning.

Yes, such sharing is a way of competing. If a company like Google or Facebook openly shares software or hardware designs, it can accelerate the progress of AI as a whole. And that, ultimately, advances their own interests as well. For one, as the larger community improves these open source technologies, Google and Facebook can push the improvements back into their own businesses. But open sourcing is also a way of recruiting and retaining talent. In the field of deep learning in particular, researchers—many of whom come from academia—are very much attracted to the idea of openly sharing their work, of benefiting as many people as possible. “It is certainly a competitive advantage when it comes to hiring researchers,” Altman tells WIRED. “The people we hired … love the fact that [OpenAI is] open and they can share their work.”


This competition may be more direct than it might seem. We can’t help but think that Google open sourced its AI engine, TensorFlow, because it knew OpenAI was on the way—and that Facebook shared its Big Sur server design as an answer to both Google and OpenAI. Facebook says this was not the case. Google didn’t immediately respond to a request for comment. And Altman declines to speculate. But he does say that Google knew OpenAI was coming. How could it not? The project nabbed Ilya Sutskever, one of its top AI researchers.

That doesn’t diminish the value of Google’s open source project. Whatever the company’s motives, the code is available to everyone to use as they see fit. But it’s worth remembering that, in today’s world, giving away tech is about more than magnanimity. The deep learning community is relatively small, and all of these companies are vying for the talent that can help them take advantage of this extremely powerful technology. They want to share, but they also want to win. They may release some of their secret sauce, but not all. Open source will accelerate the progress of AI, but as this happens, it’s important that no one company or technology becomes too powerful. That’s why OpenAI is such a meaningful idea.

His Own Apollo Program

You can also bet that, on some level, Musk too sees sharing as a way of winning. “As you know, I’ve had some concerns about AI for some time,” he told Backchannel. And certainly, his public fretting over the threat of an AI apocalypse is well known. But he also runs Tesla, which stands to benefit from the sort of technology OpenAI will develop. Like Google, Tesla is building self-driving cars, which can benefit from deep learning in enormous ways.

Deep learning relies on what are called neural networks, vast networks of software and hardware that approximate the web of neurons in the human brain. Feed enough photos of a cat into a neural net, and it can learn to recognize a cat. Feed it enough human dialogue, and it can learn to carry on a conversation. Feed it enough data on what cars encounter while driving down the road and how drivers react, and it can learn to drive.
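The learning loop described above can be sketched in miniature. The following is a toy illustration, not the systems Google or OpenAI actually build: a tiny two-layer network, written in plain Python, that learns the classic XOR pattern by repeatedly nudging random starting weights to reduce its prediction error. Real deep learning systems apply the same basic idea at vastly larger scale.

```python
import math
import random

random.seed(0)
HIDDEN = 8  # number of hidden units; small but enough to learn XOR

# Weights start random; training shapes them to fit the data.
W1 = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(2)]
b1 = [0.0] * HIDDEN
W2 = [random.gauss(0, 1) for _ in range(HIDDEN)]
b2 = 0.0

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    """Input -> hidden layer (tanh) -> output probability (sigmoid)."""
    h = [math.tanh(x[0] * W1[0][j] + x[1] * W1[1][j] + b1[j])
         for j in range(HIDDEN)]
    p = sigmoid(sum(h[j] * W2[j] for j in range(HIDDEN)) + b2)
    return h, p

# Training data: XOR -- output 1 when exactly one input is 1.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

lr = 0.5
for _ in range(5000):
    for x, y in data:
        h, p = forward(x)
        err = p - y  # gradient of the loss at the output
        for j in range(HIDDEN):
            # Gradient flowing back through hidden unit j (uses W2 before update).
            grad_h = err * W2[j] * (1 - h[j] ** 2)
            W2[j] -= lr * err * h[j]
            W1[0][j] -= lr * grad_h * x[0]
            W1[1][j] -= lr * grad_h * x[1]
            b1[j] -= lr * grad_h
        b2 -= lr * err

# After training, the network classifies all four inputs.
preds = [int(forward(x)[1] > 0.5) for x, _ in data]
print(preds)
```

Feed it enough labeled examples, and the weights settle into a configuration that maps inputs to the right outputs; swap in photos and labels, and the same principle yields image recognition.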

“It’s probably better in some dimensions and worse in others,” says Chris Nicholson, the CEO of a deep learning startup called Skymind, which was recently accepted into the Y Combinator program. “I’m sure Airbnb has great housing data that Google can’t touch.”

Musk was an early investor in a company called DeepMind—a UK-based outfit that describes itself as “an Apollo program for AI.” And this investment gave him a window into how this remarkable technology was developing. But then Google bought DeepMind, and that window closed. Now, Musk has started his own Apollo program. He once again has the inside track. And OpenAI’s other investors are in a similar position, including Amazon, an Internet giant that trails Google and Facebook in the race to AI.

Pessimistic Optimists

But, no, this doesn’t diminish the value of Musk’s open source project. He may have selfish as well as altruistic motives. But the end result is still enormously beneficial to the wider world of AI. In sharing its tech with the world, OpenAI will nudge Google, Facebook, and others to do the same—if it hasn’t already. That’s good for Tesla and all those Y Combinator companies. But it’s also good for everyone who’s interested in using AI.

Of course, in sharing its tech, OpenAI will also provide new ammunition to Google and Facebook. And Dr. Evil, wherever he may lurk. He can feed anything OpenAI builds back into his own systems. But the biggest concern isn’t necessarily that Dr. Evil will turn this tech loose on the world. It’s that the tech will turn itself loose on the world. Deep learning won’t stop at self-driving cars and natural language understanding. Top researchers believe that, given the right mix of data and algorithms, its understanding can extend to what humans call common sense. It could even extend to super-human intelligence.

“The fear is of a super-intelligence that recursively improves itself, reaches an escape velocity, and becomes orders of magnitude smarter than any human could ever hope to be,” Nicholson says. “That’s a long ways away. And some people think it might not happen. But if it did, that will be scary.”

This is what Musk and Altman are trying to fight. “Developing and enabling and enriching with technology protects people,” Altman tells us. “Doing this is the best way to protect all of us.” But at the same time, they’re shortening the path to super-human intelligence. And though Altman and Musk may believe that giving access to super-human intelligence to everyone will keep any rogue AI in check, the opposite could happen. As Brundage points out: If companies know that everyone is racing towards the latest AI at breakneck speed, they may be less inclined to put safety precautions in place.

How necessary those precautions really are depends, ironically, on how optimistic you are about humanity’s ability to accelerate technological progress. Based on their past successes, Musk and Altman have every reason to believe the arc of progress will keep bending upward. But others aren’t so sure that AI will threaten humanity in the way that Musk and Altman believe it will. “Thinking about AI is the cocaine of technologists: it makes us excited, and needlessly paranoid,” Nicholson says.

Either way, the Googles and the Facebooks of the world are rapidly pushing AI towards new horizons. And at least in small ways, OpenAI can help keep them—and everyone else—in check. “I think that Elon and that group can see AI is unstoppable,” Nicholson says, “so all they can hope to do is affect its trajectory.”

Source: Wired