Elon Musk’s OpenAI: Artificial Intelligence vs. Real Idealism

Nitya Mallikarjun
Published in The Startup
3 min read · May 4, 2016


Sometimes I think Elon Musk may single-handedly be increasing the coolness quotient of the entire human race.

I mean seriously, the man is running three companies poised to change the course of entire industries, and he still has time to take a step back and think, “alright, I guess I’ll save humanity while I’m at it.”

Last week, OpenAI released OpenAI Gym, a toolkit for developing and comparing reinforcement learning algorithms. Today, with the power to develop the next generation of AI technology concentrated in the hands of an elite group of companies whose motives may be driven by corporate interests and financial gains, I see the motive behind Elon Musk and Sam Altman’s OpenAI venture. It is an attempt to put checks and balances in place to “save us from ourselves”, so to speak.
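To give a sense of what the toolkit actually looks like in practice, here is a minimal sketch of the kind of loop Gym exposes (assuming the `gym` Python package and its bundled `CartPole-v0` environment): every environment offers `reset()` and `step()`, so different reinforcement learning algorithms can be dropped into the same loop and compared.

```python
import gym

# A minimal random-agent loop against one of Gym's classic control
# environments. CartPole-v0 ships with the original Gym release; the
# random policy below is just a placeholder for a real reinforcement
# learning algorithm.
env = gym.make("CartPole-v0")

for episode in range(5):
    observation = env.reset()          # start a new episode
    total_reward = 0.0
    done = False
    while not done:
        action = env.action_space.sample()                  # pick a random action
        observation, reward, done, info = env.step(action)  # advance the environment
        total_reward += reward
    print("episode", episode, "total reward", total_reward)
```

That standardized interface is what makes the “comparing” part possible: swap the random action for your own agent’s choice and the surrounding loop stays exactly the same.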

What exactly do we need saving from, anyways?

Let’s step back from the world of artificial intelligence for a second and think about a biological process we are all familiar with — evolution. Evolution really is a remarkable thing, whether it’s in man or machine (or software, really, because that’s the artificial intelligence we are talking about here). If evolution were a conscious, sentient being, it would not know at this point in time what its final product was going to be (hey, kind of like all of us). Evolution just goes on. Our intelligence, our consciousness, our creativity, our present world itself are all products of our evolution. We cannot even precisely predict our own world five or ten years into the future, because everything we’ve learned in the past and learn today contributes to this collective evolution of mankind. To me, that’s what the debate on artificial intelligence is about — whether we should let anything that does not exist in nature have that kind of power over itself and the freedom to choose its own destiny.

So as you can imagine, we’ve taken the first steps into a world where artificial intelligence will co-exist with us and eventually evolve beyond learning which one of your friends is in the photograph you just uploaded to Facebook, or suggesting where you might be most likely to enjoy dinner tonight based on your past preferences. As artificial intelligence software grows smarter, bigger, and better, someone needs to think very consciously about how it will learn, how it will make decisions, and how it could eventually evolve into something uncontrollable and unpredictable (again, kind of like ourselves, right? Jeez, artificial intelligence is starting to sound a lot like regular intelligence). I feel that is the real problem being solved here — the evolution of artificial intelligence systems when left to their own devices. With OpenAI, Elon Musk and a select group of other technopreneurs hope that this evolution will be nudged in a direction that is in line with the greater good of humanity.

But is removing profit from the equation and creating a system of open information and standards a real, long-term, sustainable solution to the problem of artificial intelligence evolution, or a message to others who have the means to build technology with that kind of power over itself? I feel like it’s a little bit of both. The challenge will not only be to tackle “our greatest existential threat”, as Musk has called artificial intelligence, but to uphold the same ideals and intentions through the decades (or centuries?) as the venture is confronted with even more unpredictable forces in an unknown future. After all, it is our disparate and conflicting views on what’s best for humanity that have led to the need for a venture like OpenAI. So in addition to its goal of developing AI systems that are not driven solely by short-term financial gains, OpenAI also has a responsibility to help develop that common understanding of what’s best for humanity in the first place, so its own efforts are not exploited in the future.

OpenAI has strongly encouraged its researchers to publish their work (papers, blog posts, code, patents, etc.). As they also seek to collaborate with others on research and on deploying new technologies, I hope these partnerships will be fruitful in developing that common understanding of what’s best for humanity, and in uniting technology leaders in their real idealism as they work towards a future with artificial intelligence.

Originally posted on LinkedIn
