SUPERINTELLIGENCE AND THE FUTURE OF HUMANITY

We humans pride ourselves on being the most intelligent creatures on Earth, and we love the thought of holding the highest rank in the natural order of things. With true AI on the way, however, we may have outdone ourselves and created something with the potential to dethrone us.

Dr Nick Bostrom, Director of the Future of Humanity Institute at the University of Oxford, gave a keynote at the 2018 SHIFT Business Festival, stating in no uncertain terms that we must act now if we want to stop the superintelligence looming in our future from simply getting rid of us. But what kind of action should we take when, as we speak, superintelligence and even its antecedent, Artificial General Intelligence (AGI), exist only in theory? What’s the rush?

Now, it’s hard to imagine that AI in its current state might decide that we humans are a threat to its existence. The algorithms we have at the moment can help us be more productive, more efficient, perhaps even more human, and it is yesterday’s news that machines already outperform humans in certain fields. But the superintelligence Bostrom is worried about is quite different from the AI we are now getting used to. It does not exist yet, but Bostrom theorizes that it will.

Bostrom defines this next generation of AI as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. He also believes that superintelligence, rather than any threat of an environmental or political nature, is the thing most likely to wipe us out – that is, if we allow it to grow freely and feed off the internet without setting policies or thinking about where this development may lead.

In a report funded by the Ministry of Foreign Affairs of Finland, artificial intelligence is listed as one of the leading existential risks – in other words, risks “that [threaten] the premature extinction of humanity or the permanent and drastic destruction of its potential for desirable future development”. The message of the report is aligned with Bostrom’s thinking: AI will pose a threat to humanity if it begins to consider us a threat to its own development and well-being. Tim Urban also writes about this on his blog.

Of course, the future is fundamentally unknowable, and Bostrom himself writes in his paper Strategic Implications of Openness in AI Development (2017) that evaluating medium- and long-term impacts is complicated. What we may think of as a certainty can, in turn, raise new questions.

For example, it would clearly be problematic if AI development continued in silos without any transparency, because this may well open a Pandora’s box of highly developed machine intelligence with no notion of concepts like death, justice, beauty or happiness, or of the emotional and value systems inherent to human thinking.

On the other hand, openness might create an arms race in which no one thinks beyond outgunning the opposition. Such a race might push some parties to take shortcuts, again resulting in an unscrupulous AI that does not understand humans, even though it may be considerably smarter in terms of raw brain power, highly capable of processing information and making accurate predictions.

The question is not only how we code our values and complex emotional, social and cultural concepts like death into an AI, but also whose values and concepts we use when building it. Whose value system is fit to act as the parent to the superintelligence?

In keeping with the metaphor, we, as the adult, should also monitor what the fledgling AI gets to see, as it is capable of teaching itself without any help from us. It may not be enough to decide what kind of value system we assign to it if it is then allowed to roam the internet unsupervised. As we mentioned before, AlphaGo has already shown that AI is capable of human-like creativity based on observation, trial, and error.

The key to combating this potentially harmful development is to lay down some ground rules. If there is a real risk of creating something that may decide to eliminate us to protect itself, we need to establish the best practices that will keep that from happening – before the entity we are talking about actually emerges. So although superintelligence may still be far in the future, the time to act is now.