July 18, 2017

The future of artificial intelligence: two experts disagree

This article is part of a series called Science by Compass.


Artificial intelligence (AI) promises to revolutionise our lives, drive our cars, diagnose our health problems, and lead us into a new future where thinking machines do things that we’re yet to imagine.

Or does it? Not everyone agrees.

Even billionaire entrepreneur Elon Musk, who admits he has access to some of the most cutting-edge AI, said recently that without some regulation:

“AI is a fundamental risk to the existence of human civilization”.

So what is the future of AI? Michael Milford and Peter Stratton are both heavily involved in AI research, and they have different views on how it will affect our lives in the future.


How widespread is artificial intelligence today?


Answering this question depends on what you consider to be “artificial intelligence”.

Basic machine learning algorithms underpin many technologies that we interact with in our everyday lives – voice recognition, face recognition – but these algorithms are application-specific: each can perform only one narrowly defined task (and not always well).
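To make "application-specific" concrete, here is a toy sketch of the kind of narrow learner involved – a nearest-centroid classifier that separates two clusters of points. The data and function names are hypothetical, for illustration only; the point is that the "trained" model does this one job and is meaningless for any other task.

```python
# Toy task-specific machine learning: a nearest-centroid classifier
# "trained" to separate two clusters of 2-D points. It does this one
# narrow job and nothing else. (Hypothetical data, illustration only.)

def train(examples):
    """Compute one centroid per label from (point, label) pairs."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Return the label whose centroid is closest to the point."""
    px, py = point
    return min(centroids,
               key=lambda l: (centroids[l][0] - px) ** 2 +
                             (centroids[l][1] - py) ** 2)

examples = [((0, 0), "A"), ((1, 0), "A"), ((9, 9), "B"), ((10, 9), "B")]
model = train(examples)
print(predict(model, (0.5, 0.5)))  # prints "A" (closest to cluster A)
```

Everything the model "knows" is two centroids for one clustering problem – there is nothing to transfer to speech, faces or anything else.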

More capable AI – what we might consider as being somewhat smart – is only now becoming widespread in areas such as online retail and marketing, smartphones, assistive car systems and service robots such as robotic vacuum cleaners.


The most obvious and useful examples of current AI are the speech recognition on your phone, and search engines such as Google. There is also IBM’s Watson, which in 2011 beat human champions on the US TV quiz show Jeopardy!, and is now being trialled in business and healthcare.

Most recently, Google DeepMind’s AlphaGo beat the world champion Go player, surprising a lot of people – especially since Go is an extremely complex game, far surpassing chess.
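The gap between the two games can be quantified with a Shannon-style back-of-the-envelope estimate: a branching factor b and a typical game length of d moves give roughly b to the power d possible games. The figures below are the commonly cited approximations, not exact measurements.

```python
# Rough game-tree size estimates (Shannon-style): about b**d games,
# using widely cited approximate branching factors and game lengths.
import math

chess = 35 ** 80    # chess: ~35 legal moves per position, ~80 plies per game
go = 250 ** 150     # Go: ~250 legal moves per position, ~150 plies per game

print(int(math.log10(chess)))  # chess: on the order of 10**123 games
print(int(math.log10(go)))     # Go: on the order of 10**359 games
```

On these rough numbers, Go's game tree is more than 10^230 times larger than chess's – which is why brute-force search was never going to crack it.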


What major advances in AI will we see over the next 10 years?


Many auto manufacturers and research institutions are competing to create practical driverless cars for general road use. While currently these cars can drive themselves for much of the time, many challenges remain in dealing with bad weather (heavy rain, fog and snow) and random real-world events such as roadworks, accidents and other blockages.

These incidents often require some degree of human judgement, common sense and even calculated risk to successfully navigate through. We are still a long way from fully autonomous vehicles that don’t need a licensed driver ready to take control in an instant.

The same can be said for all the AI that we will see over the coming 10-20 years, such as online virtual personal assistants, accountants, legal and financial advisers, doctors and even physical shop-bots, museum guides, cleaners and security guards.

They will be advanced tools that are very useful in specific situations, but they will never fully replace people because they will have little common sense (probably none, in fact).


We will definitely see a range of steady, incremental improvements in everyday AI. Online product recommendations will get better, your phone or car will understand your voice increasingly well and your vacuum cleaner robot won’t get stuck as often.

It’s likely that we’ll see some major advances beyond today’s technology in some but not all of the following areas: self-driving cars, healthcare, management of utilities (electricity, water and so on), legal services, and service areas such as cleaning robots.

I disagree on self-driving cars – there’s no real reason why there won’t be fully autonomous ride-sharing fleets in the affluent centres of cities, and this is indeed the strategy of companies such as NuTonomy, operating in Singapore and Boston.


Will Skynet/the machines take over and enslave humanity?


It’s unlikely in the near future, but possible. The real danger is the unpredictability. Skynet-like killer cyborgs as featured in the Terminator film series are unlikely, because that development cycle would take years and give us multiple opportunities to stop it.

But AI could destroy or damage humanity in other, unpredictable ways. For example, when big companies like Google DeepMind start entering into healthcare, it’s likely that they will improve patient outcomes through a combination of big data and intelligent systems.

One of the temptations or pressures will be to deploy these extremely complex systems before we completely understand every possible ramification. Imagine the pressure if there is good evidence it will save thousands of lives per year.

We have a long history of negative unintended consequences with new technologies that we didn’t fully understand.

In a far-fetched but not impossible healthcare scenario, deploying AI may lead to catastrophic outcomes: a worldwide AI network deciding, in ways invisible to human observers, to kill us all off in order to optimise some misguided performance goal.

The challenge is that newly developed technologies create an illusion of complete control – a control that doesn’t really exist.


All our current AI, and any that we can possibly create in the foreseeable future, are just tools – developed for specific jobs and totally useless outside of the exact duties they were designed for. They don’t have thoughts or feelings. These AIs are just as likely to try to take over the world as your Xbox or your toaster.

One day, I believe, we will build machines that rival us in intelligence, and these machines will have their own thoughts and possibly learn in an unconstrained way. This sounds scary. But humans are dangerous for exactly the reasons that the machines won’t be.

Humans evolved in a constant struggle for life and death, which made us innately competitive and potentially treacherous. When we build the machines, we can instead build them with any underlying motivation that we would like.

For example, we could build an intelligent machine whose only desire is to dismantle itself. Or, we could build in a hidden remote-controlled off switch that is completely separate from any of the machine’s own circuits, and an auto-shutdown reflex if the machine somehow ever notices it.
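As a toy illustration of how simple such an external off switch could be – everything here is a hypothetical sketch, not a real safety mechanism – the agent’s control loop can check a flag that only an outside party ever sets, and halt the moment it appears:

```python
import os

def run_agent(stop_flag_path, max_steps=100):
    """Toy agent loop with an external kill switch (illustrative sketch).

    The agent halts as soon as the flag file appears. Nothing in the
    agent's own logic ever creates or removes that file, so the switch
    stays entirely outside the agent's control.
    """
    work_done = 0
    for _ in range(max_steps):
        if os.path.exists(stop_flag_path):
            break  # "auto-shutdown reflex": stop immediately
        work_done += 1  # placeholder for one unit of the agent's work
    return work_done
```

The design choice that matters is separation: the check lives in the loop, but the flag lives outside the agent entirely, so the agent cannot reason its way around it.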

All these safeguards will be trivial to implement. So there is simply no way that we could accidentally build a machine that then tries to wipe out the human race.

Of course, because humans themselves are dangerous, someone could build a machine that doesn’t have these safeguards and use it for nefarious purposes. But we have that same problem now with nuclear weapons.

In the future, just as now, we have to hope that we are simply smart enough to use our technology wisely.


Peter Stratton and Michael Milford are academics from The University of Queensland and Queensland University of Technology.

This article was originally published on The Conversation. Read the original article.