Can AI teach us things about ourselves we are incapable of learning on our own?

Nitya Mallikarjun
Feb 12, 2018 · 5 min read
Art © Nitya Mallikarjun

Unless you just time-traveled from a different era or woke up from a really, really long slumber, you’ve probably heard about AI (if you live in Silicon Valley, you’ve probably heard a LOT about AI).

Really smart people all over the world, working for all kinds of organizations, are trying to find all sorts of applications for AI. Personal assistants like Siri and Alexa are trying to make your everyday life easier, autonomous cars are trying to learn to be better on the roads than your oftentimes-distracted self, some programs are hoping to diagnose your medical problems far better than your kind and friendly doctor with over 20 years of experience ever could, dating apps are figuring out how to find your soulmate before you do; the list goes on. On a side note: if you’re not thinking that nearly every industry is about to get dramatically disrupted by AI in the next X* years, think again.

As an engineer by training and a product manager by profession, I’m interested in the technology behind all of these applications, and you’d probably hold my attention for a pretty long time explaining them to me before my mind started to wander toward the “philosophical applications” of AI. One of them: learning something about ourselves that we are incapable of learning on our own.

The average middle-schooler (if he or she is paying attention in class) probably knows more about Isaac Newton’s three laws of motion than the entire population of mankind that came before him. You can find examples of this across generations and ideas: every time a new truth is learned, it becomes a commonplace artifact of knowledge available to the current and coming generations. How deeply people choose to dive into any of these truths depends on their personal preferences, interests, and other (often uncontrollable) factors like socioeconomic status and cultural or generational environment. I think of certain base knowledge as being “seeded” into an individual, no matter who (or when) they are, based on factors like these, after which they are on their own to figure out how to best use that knowledge for individual or communal good (hopefully).

I find AI today to be working in a similar way. Most of our programs or applications are “seeded” with some existing data (unfortunately, often biased) around a particular behavior, after which they “learn” their way to something new. The greater a program or machine’s ability to “learn,” the greater the chance it might find something novel. Recently I started thinking about what that “new” or “novel” truly is. Sure, it’s new and novel for the program itself, but that’s not something we particularly connect to, as the program isn’t a conscious or sentient being that really knows what it’s doing (yet). As humans, we’re most impressed when that “new” or “novel” is something we couldn’t have thought of ourselves. That’s why AI is so fascinating to us: it’s a means for us to learn something beyond what we’re capable of right now, like a 12-year-old being able to articulate the three laws of motion before Isaac Newton existed.
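
To make the “seed, then learn” idea a little more concrete, here is a minimal illustrative sketch in Python. The tiny dataset, the linear model, and the use of scikit-learn are all my own assumptions for illustration, not a description of any particular AI product: a program is handed a handful of observations that implicitly contain one of Newton’s laws, and it generalizes to a case it was never shown.

```python
# A minimal "seed, then learn" sketch (illustrative only; data and model
# are assumptions for this example, not any specific AI system).
import numpy as np
from sklearn.linear_model import LinearRegression

# "Seed" data: a few (force, acceleration) observations for a 2 kg mass,
# i.e. samples that implicitly contain F = m * a.
forces = np.array([[2.0], [4.0], [6.0], [8.0]])   # inputs, in newtons
accelerations = np.array([1.0, 2.0, 3.0, 4.0])    # observed outputs, in m/s^2

# The program "learns" a relationship from the seed data alone.
model = LinearRegression().fit(forces, accelerations)

# It can now answer a question it was never shown explicitly.
print(model.predict(np.array([[10.0]])))  # ~5.0, generalizing beyond the seed
```

It’s a toy example, of course; the programs in the stories below learn far richer behaviors from far larger seeds, but the shape of the process is the same.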

AlphaGo (spoiler alert for the next two paragraphs) is one of the most profound and poignant stories about AI I have come across in recent times. It is a documentary following an AI program developed by Google’s DeepMind to play the ancient Chinese board game of Go as it defeats first the European champion, Fan Hui, and then the legendary player Lee Se-dol. Should you choose to watch it (and considering you have some level of interest in AI and its applications), you will go through a plethora of emotions that will both confuse and fascinate you.

The AI program and Lee Se-dol play five games over the course of the match, with Lee Se-dol winning one (Game 4) out of the five through what came to be called a “divine move” on his part. But I think the origin of, or inspiration for, this “divine move” in Game 4 lay in one played by the AI program in Game 2. The 37th move of the match’s second game (which AlphaGo wins) was a special one: the program played a move that surprised everyone, including Lee Se-dol. What’s amazing about AlphaGo is that it not only understands how humans play, it can also look beyond *how* humans play to take the game to an entirely different level. This is what happened with Move 37. AlphaGo calculated that there was a one-in-ten-thousand chance that a human would make that move. It decided to take a chance that was basically non-human in its behavior. And I think that’s what ultimately inspired Lee Se-dol to take a similar chance in Game 4 and win against the AI program. The AI program showed a possibility to one of the best players of Go that he himself did not anticipate could exist before coming face-to-face with the program.

Lee Se-dol, a genius by all accounts, playing not only for himself but for all of humanity, ultimately loses the match to the AI program but declares that he has started to see the game and its possibilities in a new light. He finds it beautiful and unexpected and feels it will make his own game better. To me, this was the significant moment that got me thinking: there are things that AI can teach us about ourselves as humans that we are simply incapable of seeing today, because we’re that 12-year-old before Isaac Newton’s time, unaware of the existence of the three laws of motion. What is today so far-reaching and incredible to us that we will take it for granted as a commonplace artifact of knowledge tomorrow? Maybe AI has an answer for us, although we may not even know the question yet.

As someone interested in the philosophy of AI, I’m also deeply aware of some of the ethical dilemmas of AI. I have written about them myself in the past. But as someone passionate about learning new things both for individual and communal good, I’m feeling somewhat hopeful at the moment.

*X, unfortunately, is unknown ;)
