In this fast-paced digital world where too often discovery, risk and reward overshadow ethical approaches and outcomes, getting the Artificial Intelligence (AI) balance right is one of the biggest challenges facing us today.
And as we begin this fourth industrial revolution, it is becoming clear that public trust and community confidence in AI present by far its biggest challenge. In fact, the entire future of AI will largely depend on the confidence that society places in the technology. For this reason alone, it is timely for Australia to develop its own national AI strategy, a blueprint to guide us through this potential ethical and moral minefield, before the horse has well and truly bolted.
Two years ago, Russia’s President, Vladimir Putin, proclaimed ‘the one who becomes the leader in this sphere will be the ruler of the world.’ The ‘AI race’ is well underway and quickly gathering pace, and Australia risks being left behind.
While the USA, Russia, Canada, India, China, France, Germany, Singapore and Japan are amongst the global leaders in adopting AI strategies, Australia remains without a strategic long-term plan. While others are investing billions in the development of AI technologies, the 2018-19 federal budget allocated just $30 million over four years to promote and guide the development of AI in Australia.
The recent release of an in-depth report into AI by the Australian Council of Learned Academies (ACOLA) is a vital step in developing a comprehensive and considered AI strategy. The report, The Effective and Ethical Development of Artificial Intelligence: An Opportunity to Improve our Wellbeing, reveals much about the significant benefits of this new technology for Australia, while also shining a light on the inherent risks and challenges it presents.
The ACOLA report is a pioneering study, addressing a range of issues relating to AI, including its current and future impact on manufacturing, mining, agriculture and the environment. Like anything that is new and rapidly evolving, the growing influence of AI, while exciting as an economic driver, is also creating suspicion in the community and needs to be better understood and trusted.
The issue of trust in AI systems raises many definitional problems, including: trust that the algorithms will produce the desired output; trust in the values underlying the system; trust in the way data in the system is protected and secured; and trust that the system has been developed for the good of all stakeholders. These questions of trust take users far beyond the simple matter of whether they believe the technology works.
Just as the automobile had to earn public trust in the post-Industrial Revolution landscape of a century ago, AI in the 21st century is at the very beginning of its journey and must likewise build community confidence.
Importantly, the ACOLA report looks not only at the scientific and technological aspects of AI, but also explores questions where the humanities can provide insights, such as human rights, equity and access to technology. It also considers questions such as the right to work.
Many claim that AI will transform the tasks involved in work, create new roles and establish new skill sets. But the consequences of widespread automation are likely to be different for women and men, with implications for socio-economic equality that could set us back decades in closing the global gender gap.
As the co-author of the ACOLA report, leading ethics expert Professor Neil Levy FAHA, has identified, the broader implementation of AI will require a growth in the number of people with specialised scientific knowledge and skills. But as the scientists themselves well recognise, it will also require and drive demand for expertise to provide the necessary ethical, social and cultural checks and balances. This is where the humanities have a crucial role to play.
For those who think AI will come at a cost for creatives: think again. The humanities are well placed to support effective AI development and implementation. In fact, AI will almost certainly fail if it doesn’t embrace the humanities as a key element of its continuing evolution. AI will lead to the automation of many tasks that people currently carry out, leaving us to contribute the important creative and communicative components of 21st century employment.
The humanities have always embraced change and share the excitement at the potential of AI. We also stand ready to play our part in building our own talent base and establishing an adaptable, skilled and highly creative workforce for a future in which AI is the latest technology with the ability to enhance Australia’s wellbeing, lift our economy and improve environmental sustainability.
And when it finally comes time for Australia to develop its own AI strategy, humanities researchers will be best placed to apply a unique lens: to critique, analyse and understand AI from a different perspective, promoting a future based on a cohesive and ethical culture and society in the midst of radical and transformative technological change.