As artificial intelligence becomes more prevalent in our society, a new research project is considering what threat it could pose to our way of life and the lessons we can learn from similar technology.
Dr Christopher Hobson, from The Australian National University’s Coral Bell School of Asia Pacific Affairs, is researching some of the major safety challenges presented by the further development and use of artificial intelligence (AI).
“At its most extreme, humans risk creating an actor that is more intelligent, one that could dominate and determine our collective fate,” Dr Hobson said.
“But, quite simply, AI doesn’t need to take over the world to pose a threat to it. Long before we have to confront any possibility of superintelligence, we need to consider and prepare for a range of less extravagant but equally dangerous scenarios in which AI and related technologies may cause significant harm.”
Dr Hobson has been awarded a one-year grant by the Toshiba International Foundation for a research project, ‘Safely integrating AI into society’. The research will be undertaken in collaboration with Keio University’s Cyber Civilization Research Center, where Dr Hobson is a Visiting Research Fellow.
The project builds on Dr Hobson’s previous work on technological accidents and will also look at what lessons can be taken from how we manage other technological risks and make them tolerably safe. Given current trends, it is clear AI and related technologies will become much more prevalent and widely used in the short to medium term.
The consequences of COVID-19 are likely to hasten many predicted changes, such as greater use of AI, big data, and related technologies. Guiding this transition well will require greater attention to AI safety and risk management.
“Accidents, malevolent use by actors, and unintended consequences are some of the other dangers that we need to prepare for,” Dr Hobson said.
“There is a need to consider the more banal – but also more likely – ways that AI may threaten society.”
While there has been work on the longer-term risks of creating machines that are more intelligent than humans, there are more immediate dangers that come from a greater reliance on machine learning, big data and related technologies, without sufficient safety measures in place.
In exploring these problems, the project will consider what lessons can be taken from the way other technological risks have been managed, most notably nuclear power.
For more information about the project, contact Dr Hobson: email@example.com
Image by Adrian Baer, via Wikimedia.