People often ask me if I’m optimistic or pessimistic about the challenges that AI poses. I say that I’m both.
The future rarely unfolds like we expect. Take AI. We fear the rise of our machine overlords. And Hollywood has given us plenty of technological tropes to worry about. You will know them well.
There is the terrifying T-800 robot played by Arnold Schwarzenegger in the Terminator movies that is engaged in a war against humans. There is the Tyrell Corporation Nexus-6 replicant robot in Blade Runner, fleeing Harrison Ford after a bloody off-world robotic mutiny. There is Harlan, the rogue AI in the recent Netflix movie Atlas which, being programmed to save humanity from risk, seeks to destroy humanity itself based on our history of destructive behaviour.
My personal favourite is HAL 9000, the sentient computer in 2001: A Space Odyssey. HAL talks, plays chess, runs the spaceship, and has murderous intent. HAL voices one of the most famous lines ever said by a computer: “I’m sorry, Dave. I’m afraid I can’t do that.” Why is it that the AI is always trying to kill us?
Our artificially intelligent future, which is rapidly arriving, holds none of these conscious robots with lethal ambition. Our AI future is much more mundane and much more insidious.
This should not be a surprise. Hints of this future were given very early on. In 1909, for example, in a short story titled The Machine Stops, the great EM Forster painted a prophetic picture of our digital future. The story gets a lot right. Written more than a century ago, it predicts globalisation, the internet, videoconferencing and many other aspects of our current digital reality. It is a haunting tale of a high-tech haven that hurtles towards a horrifying bloody halt. Without noticing, humans in the story become so dependent on the technology mediating their society that society itself breaks when the machines do.
Just 30 years ago, the marvellous Carl Sagan wrote a book, The Demon-Haunted World: Science as a Candle in the Dark, that ought to be required reading for every politician, educator and concerned citizen today. Back in 1995, Sagan prophetically wrote:
I have a foreboding of an America in my children’s or grandchildren’s time—when the United States is a service and information economy; when nearly all the key manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what’s true, we slide, almost without noticing, back into superstition and darkness. The dumbing down of America is most evident in the slow decay of substantive content in the enormously influential media, the 30-second sound bites (now down to 10 seconds or less), the lowest common denominator programming, credulous presentations on pseudoscience and superstition, but especially a kind of celebration of ignorance.
This future has, it seems, now arrived. Not just in America, but in Australia and indeed every other Western democracy. Every one of Sagan’s predictions has come true.
The night is starting to envelop us. And in that darkness, we have lost trust in many of the institutions which make up our society.
The questions that keep me awake at night
We have lost trust in government and politics. In the Australian Public Service Commission’s 2023 survey of Trust and Satisfaction in Australian Democracy, three-quarters of Australians did not trust their political parties. And more than half did not trust government itself, state or federal.
We have lost trust in journalism. In the same survey, only a third of our population trusted newspapers and only half trusted the public broadcaster. Indeed, most of us have switched off broadcast television altogether, with audiences shrinking and ever ageing.
We have also lost trust in other important societal glues, such as religion. Just one in five of us still goes to church, temple, mosque or synagogue. This is a dramatic drop. Back in the 1950s, around half of Australians attended church regularly.
We have even lost trust in science. Only 61 per cent of the population polled in the Trust and Satisfaction survey trusted scientific reports. Scientific truth is now often in dispute. From climate change to COVID, the public is overly sceptical. Scientists like me are increasingly afraid to put our heads above the parapet for fear of the incoming projectiles.
What is causing this loss of trust? And what impact is this having on our society? These are questions that are starting to keep me awake at night. Especially as I see the way that my field, AI, is feeding into this loss of trust.
How society is fragmenting
It’s hard to be sure about the causes and the impact of this distrust since society is, after all, a bit of an amorphous concept. But perhaps the biggest threat to society — to the values that many of us hold dear, like democracy, equity and freedom of speech — is that society itself is fragmenting. And it is hard not to identify many technological drivers that are contributing to that fragmentation.
Identifying these drivers is a start. For we can only hope to nurture society if we identify the forces arrayed against it.
First, and I suspect foremost, is social media. The biggest ruse, of course, was calling it “social media” when, in fact, it might be more accurately called “anti-social media”. It was supposed to connect us, but rather than do that, it polarises and drives us apart.
The other issue with the name “social media” is that, despite media being the other half of the name, we bizarrely don’t consider social media to be part of the media. Yet, for many people, social media is their main source of news. Facebook is arguably the largest news organisation on the planet, even if, for financial and legal reasons, it is now trying frantically to distance itself from this role.
Social media is becoming humanity’s main distraction. It is intermediating many of the relationships we have with each other. The average Australian checks their phone every eight minutes. Half of us say we wouldn’t be able to continue our day-to-day lives if we lost our phones. We are addicted to the dopamine hits that it gives us.
The second technological driver of this breakdown in trust is the demise of broadcasting and its replacement by AI-powered streaming services. What we see now is often the choice of the algorithms. We live in a world of micro-targeting where AI algorithms filter what we see.
Around 80 per cent of the content that people watch on Netflix is not what they came to watch but what the algorithms recommend. YouTube has more than two billion monthly active users, and more than a billion hours of YouTube content are watched around the world every day. The content we watch is no longer broadcast. It is narrowcast.
Audiences of traditional broadcasters continue to decline. And content, as well as delivery, is adapting to these new and shorter forms.
The third technological driver is the rise of misinformation and disinformation. The algorithms that engage us on social media encourage content that creates a reaction, and misinformation and disinformation provoke a reaction. Facebook claims to encourage meaningful engagement but the reality is that it encourages clickbait.
We are used to believing what we see and hear. But we are entering a world in which we have to entertain the idea that anything we see or hear is fake.
Timothy Snyder, in his 2017 book On Tyranny: Twenty Lessons from the Twentieth Century, described well the predicament of a society in which we cannot believe anything:
“To abandon facts is to abandon freedom. If nothing is true, then no one can criticise power, because there is no basis upon which to do so. If nothing is true, then all is spectacle.”
Sadly, truth was already a pretty fungible idea. But this is about to get much, much worse. The tools used to generate fake content are widely available and easy to use. Deepfakes have been an unwelcome feature of every election held around the world since generative AI tools like Stable Diffusion and Midjourney became available. And no doubt, they’ll be turning up shortly in Australia.
In most countries, we have strict laws about the use of traditional media to influence elections but social media up-ends this. Elections can be won by those with the best algorithms and the most convincing lies.
Cult of ignorance
The fourth driver is an increasing dislike of, even disdain for, experts — especially scientific experts. There are many factors behind this rejection, such as increasing inequity within society (driven in part by technological disruption and globalisation), and the rise of conspiracy theories and other forms of misinformation (again driven by the algorithms).
There is, however, the deeper problem that in many countries we don’t, for bizarre reasons, value intelligence in politics. The science fiction author Isaac Asimov identified this problem back in 1980:
There is a cult of ignorance in the United States, and there has always been. The strain of anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge’.
And ignorance is nowhere more prevalent than in science and technology. It is unfortunately perfectly acceptable to be ignorant of the science and technology transforming our society. Carl Sagan again warned of the problem back in 1995:
We’ve arranged a global civilisation in which most crucial elements profoundly depend on science and technology. We have also arranged things so that almost no one understands science and technology. This is a prescription for disaster. We might get away with it for a while, but sooner or later this combustible mixture of ignorance and power is going to blow up in our faces.
The fifth driver is that digital technologies are destroying the income of old-fashioned media. This makes it harder and harder for the fourth estate to afford to shine a light on our democracy. The tech companies now take 90 per cent of all digital ad spending, and around half of the total spend on all advertising.
In Australia, the News Media Bargaining Code was a brave attempt to prevent journalism from sinking by having the large technology platforms pay local news publishers for the news content made available or linked to on their platforms. But it is unclear whether the platforms are prepared to support old-fashioned media in this way in the long run.
The sixth technological driver of this breakdown in trust is that digital technologies have emptied the town square. Society depends on us having a shared vision.
Many people are unaware that the internet they see is unique to them. Even if we surf the same news websites, we’ll see different news stories based on our previous likes. And on a website like Amazon, almost every item and price we see is unique to us. It is chosen by algorithms based on what we have previously wanted to buy and been willing to pay. There is little on the web that we share in common.
We often think our superpower is our intelligence. But our superpower is actually our society. It is our ability to come together and collectively work on shared problems. But to do so, we must have shared goals. And digital technologies are undermining that shared vision. We are increasingly living in our own digital bubbles.
The seventh driver of this breakdown in trust is AI-powered surveillance. Again, if you want to scale surveillance to cover a nation, then digital technologies are perfect for the job.
In China, there’s a facial-recognition system that can scan a billion faces in a minute. It can surveil essentially the whole population in real time. And it is being used to track and persecute the Uighurs — a sizeable ethnic group in north-western China. And in case you had any misunderstanding of the state’s intention, it has been helpfully named “Skynet”, after the AI computer in the Terminator series.
Here in the West, we see this surface in what is wonderfully termed “surveillance capitalism”. When the product is free, you are the product. And much of the digital economy revolves around collecting and selling data about you.
Digital surveillance is insidious. Even if you aren’t actually being surveilled, the possibility that you might be changes your behaviour.
Of course, this was all the opposite of what we were promised. Technology was supposed to be a force for good. Social media was, for example, supposed to bring us together. But in reality, it has driven us apart.
How we were lured here
We should not be surprised that we have ended up here. We were seduced into this place. At the start, these digital technologies appeared to be a good thing. During the series of Arab Spring uprisings in the Middle East in 2010-11, social media gave voice to those who previously had no voice. It helped mobilise those campaigning for democracy. Similarly, in the 2008 election of Barack Obama as president of the United States, social media got the young, people of colour, and other under-represented voices out to vote. It all seemed marvellous at the time.
But quickly, the same technologies were turned to less positive ends. They were used to spread untruths in the Brexit referendum, and to suppress votes in order to elect more extreme candidates like Donald Trump. And we now see states like Russia conduct election interference in other countries using these same tactics.
This is not the first time that technology has threatened to disrupt society. Other technologies, from the steam engine to electricity, have already transformed our lives dramatically. AI will, in many ways, be no different to these past transformations. It will alter almost every aspect of our lives: how we are born, live, work, play and die. But there is one way in which the AI revolution will likely be different: the speed with which it transforms our lives.
The industrial revolution took more than 50 years to play out. Electricity took several decades. Even the internet took a decade or so to take hold, as we had to get people online. The AI revolution is different. We’ve already put the plumbing in. You only need the URL or API of some AI service and you can get to work.
Vast amounts of money are being invested in AI. In 2024, around a half-billion dollars was invested in artificial intelligence. Every day. We’ve never seen anything like this scale of investment before. And it is starting to pay off. Within a year of launching ChatGPT, OpenAI went from no revenue to over $1 billion per year, and a valuation of somewhere around $100 billion. This is without precedent in the history of capitalism. AI is arguably the largest gold rush ever.
It would be easy, then, to be pessimistic. To argue that irresistible forces are in play. That the financial incentives are immense and misaligned with the public good. That these technology companies are more powerful than many countries. But there is everything still to play for. The future is not decided. It is the product of the choices that we make today.
We have the power to shape the future
Technology shapes society, but society also gets to shape technology. There are levers we can pull that will enhance our society and counter the forces pulling it apart. The fact that there are so many levers to pull should itself bring some reassurance.
First, we can hold the platforms more accountable for the content they serve, just as we hold traditional media accountable for the content they deliver. The digital platforms can no longer hide behind the argument that they are just an intermediary. Indeed, now that they are increasingly delivering AI-generated content of their own creation, this argument is plainly false.
Second, we can protect those who are easily manipulated or harmed, including those under 16. In a decade’s time, I imagine we will look back at social media like we look today at tobacco and alcohol. Young minds in their formative years need to be protected.
The recent decision to prohibit deepfake pornography is a good example of protecting those who are being harmed. But it is just one of the most obvious harms that needs to be addressed. There are many others, such as doxing and cyberbullying. And sadly, unless we do something about it, AI will likely only make such harms more prevalent.
Third, we can regulate to ensure truth in political advertising. In most countries there are strict laws about truth when advertising commercial products. But you can say almost anything you like in political adverts. In the past, if you said an untruth, everyone would see it. But now, you can tell every voter a different lie. And no-one else sees the lies that you see.
Fourth, we can regulate against deepfakes. Fake content will undermine our faith in many of our institutions. Compare this to another area where fake content would be dangerous. We rightly worry about fake money undermining our confidence in the financial system, so we have strong penalties for counterfeiting money. We need to do the same with deepfakes.
Unfortunately, it is not as simple as banning fake content. We have to balance this against maintaining freedom of speech. For instance, our ability to parody politicians is an important part of the political process. We must walk the delicate tightrope between reducing fake content and supporting satirical debate.
Fifth, we need to embrace technological measures like digital watermarking to reduce the impact of deepfakes and protect intellectual property rights. This is, in fact, the perfect application for the blockchain — we have finally found something good to do on it!
Sixth, we can restore financial support to the fourth estate for its role in shining a bright light on our democratic institutions. The News Media Bargaining Code was an attempt to do precisely this. We can double down on such initiatives, taxing the technology companies to ensure we can protect democracy by uncovering the lies and corruption that thrive in the dark.
Seventh, we can develop digital technologies that can amplify democracy. For example, more than 7000 cities around the world are using participatory budgeting to decide budgets for states, housing authorities, schools and other institutions. The New York Times has called it “revolutionary civics in action”. It deepens democracy, builds stronger communities, and can more fairly distribute public resources.
Eighth, we can use digital technologies to increase transparency. Sadly, the long-running saga around the attempted extradition of Julian Assange to the United States has distracted attention from the revolutionary potential of websites like WikiLeaks. Such initiatives encourage and protect whistleblowers, helping to preserve democracy by shining a light on wrongdoing. The internet was, and still is, a powerful force to improve democratic transparency.
Ninth, we can strengthen our digital privacy. In many respects, we are at the technological low point in terms of our privacy. To do anything interesting, we need to share our data with the tech giants and their AI algorithms. But advances like federated learning, where AI models are trained without sharing our personal data with the tech giants, promise to give us back our privacy. Indeed, AI will increasingly be smart and small enough so that our data doesn’t leave our devices.
Tenth, we can ensure that access to digital technologies is a fundamental human right. If we do not, the world will divide into the digital haves and have-nots. Access to the internet is becoming as important as other basic rights, like freedom of speech. We learnt during COVID lockdowns how many children in Australia did not have access to a single device at home on which to access the web. This cannot continue.
The future in our hands
Ultimately, digital technologies like AI have the potential to increase trust. But we need to make some good choices to ensure that they do.
People often ask me if I’m optimistic or pessimistic about the challenges that AI poses. I say that I’m both. I’m optimistic that AI will ultimately bring great benefits.
But in the short term, I’m pessimistic. Sadly, our children are set to inherit a worse world, due to a raft of problems, from the climate emergency to global insecurity, and, as I’ve outlined here, distrust in the very institutions that we now need most.
The future requires us to be careful, smart, and committed to using AI to build, not break, our trust in society. And if we use technologies like AI wisely, we might look back at this time as the start of a new era of increasing, and not decreasing, trust in our democratic institutions.
So, do we trust AI? Maybe not yet, but with the right choices, we could.
This is an edited extract from Age of Doubt: Building Trust in a World of Misinformation, edited by Tracey Kirkland and Gavin Fang, released in March by Monash University Publishing.
Credits
Words: Toby Walsh, Chief Scientist at the UNSW AI Institute, Australia
Illustration: Gabrielle Flood