Why the humanities are essential to artificial intelligence

We want Australians to be confident and capable users of trustworthy artificial intelligence (AI). The foundations of AI are being laid now, and it’s essential that as a country we apply it in a way that supports our agency, purpose and opportunity. That’s why humanities scholars are at the forefront of providing advice about AI to governments, starting with a rapid research report for the Chief Scientist.


Without even knowing it, you’ve likely benefited from AI. Maybe you’ve asked Siri to start a timer, checked Google Maps for the best route during peak hour traffic or found the cheapest flight using a booking site. 

But the rise of chatbots like ChatGPT has brought a new wave of AI to the masses, one that goes beyond simply analysing content to generating it.

What is generative AI?

Whether it’s ChatGPT (developed by OpenAI and backed by Microsoft), Bard (by Google) or Ernie Bot (by Baidu), generative AI works in the same way: you write a prompt or ask a question and, in response, you’re given content such as text, images, code or music. But in the back end, complex mathematical models are at work.

Rapid Response Information Report: Generative AI

Prepared for the Australian Chief Scientist and released 1 June 2023, the report explores the risks and opportunities associated with generative AI. 

The generative AI system reads your prompt, matches it against the patterns it has recognised in the datasets it was trained on, and produces sophisticated answers based on those patterns. The trained models behind this are known as Large Language Models (LLMs) and Multimodal Foundation Models (MFMs). Developers decide what user requests should be handled and how, including whether a request is judged appropriate or ethical.

While the predictive text and image generation functionality of these systems is not new, what is novel is the scale of the data used to train them and the extremely large number of parameters in the models.

These systems require staggering amounts of computing power and extensive human resources to train, develop and deploy.
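To make the idea of ‘recognising patterns’ concrete, here is a minimal sketch in Python of next-word prediction, the mechanism at the heart of text generation. It is a toy bigram model with an invented three-sentence corpus, not anything drawn from the report: real LLMs are neural networks with billions of parameters trained on vast datasets, but the core principle of predicting the next token from patterns in the training data is the same.

```python
# Toy illustration of next-word prediction (a bigram model).
# The corpus below is invented for illustration only.
import random
from collections import defaultdict, Counter

corpus = (
    "the humanities ask what intelligence means . "
    "the humanities ask how machines shape culture . "
    "machines learn patterns from data ."
).split()

# Count how often each word follows each other word in the training data.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(prompt: str, length: int = 8) -> str:
    """Extend the prompt one predicted word at a time."""
    words = prompt.split()
    for _ in range(length):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break
        # Sample the next word in proportion to how often it
        # followed the current word in the training data.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the humanities"))
```

Run it a few times and the output varies, because the next word is sampled in proportion to how often it followed the current one in training, which is also why the same prompt can yield different answers from a real chatbot.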

> Read pages 2-3 of the report for a more detailed explanation of the technology behind generative AI

The leading role of humanities in AI development

Philosophers, anthropologists, historians and artists have been prominent participants in and contributors to the development of technology since the industrial revolution.

And they continue that tradition today, with the humanities providing the deepest understanding of AI’s impact on humanity from an ethical, historical, creative and cultural standpoint. For example:

  • Philosophy scholars have long examined, and provided the theoretical underpinning for, questions such as ‘can a machine act intelligently?’, ‘can it solve any problem that a person would solve by thinking?’ and ‘are human intelligence and machine intelligence the same?’
  • Historians and literature professors are gearing up to unlock the potential of AI, with LLMs and MFMs providing unprecedented abilities to rapidly search and make meaning of digital collections (e.g. Australia’s Trove, collections from museums and galleries, as well as biological, cultural and societal records).
  • Social scientists have been exploring the risks of inaccurate AI outputs, with growing concern that LLMs and MFMs are being trained on biased datasets, leading to perverse outcomes.
  • Linguists are considering the impact on diverse cultures of technologies trained predominantly on North American English.

The rapid evolution of generative AI is a watershed moment for society. It was only last month that more than 1,000 AI experts, researchers and backers (including Elon Musk and Steve Wozniak) called for an immediate pause on ‘giant AI experiments’ to allow a considered analysis of their capabilities, dangers and risks.

Arguably, the humanities have never been more relevant than they are now. That’s why we’re taking a leading role in shaping policy about AI. 

Our Fellows recently co-led the development of the Rapid Response Information Report: Generative AI to help policymakers gain a strong understanding of the risks and opportunities associated with the rapid development of generative AI.

Top takeaways from the report

As our Fellows and colleagues explore the role of generative AI in our daily lives and our future, it is critical that we maintain an active and informed conversation about how we use this and other emerging technologies.

We’re calling attention to:

The role of humanities as experts in meaning-making

Humanists are well equipped to lead the development of AI, and of education in the age of AI. As we said in our submission to the Universities Accord process, our higher education system will need fundamental realignment, including the creation of new education programs and new approaches to challenge-based research initiatives that see the humanities represented on their own terms.

Knowledge disciplines: the rock on which we build informed engagement with AI

Misinformation and the misrepresentation of evidence are rising in prominence, and there is a danger that LLMs and MFMs will amplify them. This is not a crisis that science alone can address: it is a philosophical and a social matter, and its analysis can and should be informed by humanities forms of enquiry.

Ethical creation of generative AI

The ethical creation of generative AI involves both the model itself and its data. Key considerations include validity and reliability; trust in and accuracy of answers; safety; accountability and transparency; explainability and interpretability; and privacy and the management of biases.

Social outcomes

The report highlights contextual and social risks: risks to human rights and values arising from AI use in high-stakes contexts (such as law enforcement, health and social services), and risks posed by the more ‘routine’ deployment of AI that reproduces and accelerates existing social inequalities.

More about the report

The Rapid Response Information Report: Generative AI was prepared by the Australian Council of Learned Academies, alongside the Australian Academy of the Humanities, the Australian Academy of Technological Sciences and Engineering and the Australian Academy of Science.

Lead authors were Distinguished Professor Genevieve Bell FAHA FTSE (The Australian National University), Professor Julian Thomas FAHA (ARC Centre of Excellence for Automated Decision-Making and Society, RMIT University), Professor Jean Burgess FAHA (ARC Centre of Excellence for Automated Decision-Making and Society, Queensland University of Technology) and Professor Shazia Sadiq FTSE (University of Queensland) with input from 24 expert contributors and 7 peer reviewers.

Interested in AI?

Join us on 16 and 17 November for our 54th Annual Academy Symposium, ‘Between humans & machines: exploring the pasts and futures of automation’, where we will explore the possibilities and hazards of automation and the complexities of human-machine relations.

> Find out more

 

Acknowledgement of Country

The Australian Academy of the Humanities recognises Australia’s First Nations Peoples as the traditional owners and custodians of this land, and their continuous connection to country, community and culture.