AS HAL SAID TO DAVE:

“THIS MISSION IS TOO IMPORTANT FOR ME TO ALLOW YOU TO JEOPARDIZE IT.”

It’s a line from the classic 1968 film 2001: A Space Odyssey, produced and directed by Stanley Kubrick, who wrote the screenplay with science fiction author Arthur C. Clarke. Hal was an early, though fictional, manifestation of artificial intelligence, A.I., with human traits.

Regarded as one of the most influential films of the 20th century, 2001 has a narrative about many things, one being the human-like supercomputer, the HAL 9000, which makes an error but is unable to recognise or acknowledge it, resulting in the deaths of all the spacecraft’s crew except Dave, who manages to disconnect Hal.

Artificial intelligence has been big in the news with the release of numerous software packages broadly termed ‘generative pre-trained transformers’ (GPT); ChatGPT is but one of many. Reception has been mixed: some see only sunny uplands of data processing, others dark replicas of the HAL 9000.

GPT programs are artificial neural networks: webs of electronic nodes, each with numerous links to others, that loosely mimic the synapses of our brains. Trained on large data-sets, they are able to generate new texts drawn from the accumulated data. And these data-sets are continuously expanding.
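To make that less abstract, here is a toy sketch, in Python, of the underlying principle: predict the next word from the words already seen, over and over. Real GPT systems do this with transformer neural networks and billions of learned parameters; this miniature uses simple word-pair counts instead, and its ‘corpus’ is just a few lines of Hal’s dialogue, purely for illustration. But the generate-one-word-at-a-time loop is the same idea.

    import random
    from collections import defaultdict

    # Toy 'training data': in a real system, a vast scrape of text.
    corpus = ("this mission is too important for me to allow you "
              "to jeopardize it i am sorry dave i am afraid "
              "i cannot do that").split()

    # 'Training': record which words follow which in the accumulated data.
    follows = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current].append(nxt)

    # 'Generation': repeatedly sample a plausible next word.
    def generate(start, length=12):
        words = [start]
        for _ in range(length):
            candidates = follows.get(words[-1])
            if not candidates:
                break
            words.append(random.choice(candidates))
        return " ".join(words)

    print(generate("i"))  # e.g. 'i am sorry dave i am afraid i cannot do that'

The output is new text in the statistical style of what was fed in: fluent-sounding, but with no understanding behind it.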

And remember, A.I. isn’t restricted to words: it can create music, moving and static images, and mimic voices, if it has a sufficient sample to draw on.

Today, the HAL 9000’s simpler descendants are already here. The aggregations of algorithms that run Facebook, Google or TikTok have already achieved much of Hal’s capacity for awareness of purpose, lacking, perhaps, only self-awareness or conscience.

The trouble is, there is evidence that no one knows how these whole aggregations of algorithms work, or what they are capable of.

In 2021, when the Commonwealth government introduced the News Media Bargaining Code to oblige Facebook and Google (and others) to compensate media companies for the content published on their platforms, Facebook retaliated by closing its Australian news-feed. The shutdown had an unexpected outcome: hundreds of other sites, many of them not-for-profit charities and arts organisations, were shut out.

This collateral damage adds weight to an argument I have advanced since Mark Zuckerberg’s appearance before the US Senate in April 2018. Not surprisingly, the transcript revealed that the Senators knew little about how Facebook functions. But, more surprisingly, it seemed to confirm that Zuckerberg’s own knowledge of the subtleties of Facebook was limited, if he was speaking truthfully.

That should be unsurprising. Facebook is run by this aggregation of algorithms, interacting in complex ways with little human intervention. Without doubt, its programmers understand the individual parts, but not how the whole functions as an organism. It is now a massive piece of artificial intelligence capable of acting outside human command, at least in the short term.

And with its computer servers spread across the world, there’s no OFF switch for Dave to find!

Because of this lack of knowledge of overall function, the programmers who designed the shutdown of the Australian news-feed could not anticipate the collateral damage to arts, health and civic sites: the differences were too subtle for the present algorithms to discern. But, endowed with artificial intelligence, the algorithms will learn.

Without doubt, advanced A.I. will have some impact on just about every aspect of human life, and, in particular, on what and how we communicate, what we choose to believe or trust, and, hence, what we do and the way we are governed.

At the International News Media Association’s World Congress of News Media in New York at the end of May, the chief executive of News Corporation, Robert Thomson, said ‘journalism content is under serious threat and it is being harvested, scraped and used to train [A.I.] engines that ultimately undermines the work of reporters’.

His statement touches on several concerns. First, since generative A.I. harvests what has already been written, then re-purposes the content, its output is not an original story; it is not original reporting. Second, because the generative output is a synthesis of all the material the A.I. can find, it can recycle fake news as well as true reports, creating an amalgam of fact and fiction; worse, the input material might be intentionally manipulated, flooded with fake stories, and the A.I. might judge the fake news to be the true story because of an apparent ‘consensus’.
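A toy sketch, again in Python, of that last risk. No real system works quite this way, but if a model in effect weights claims by how often they appear in what it has scraped, a coordinated flood of fakes manufactures the winning ‘consensus’:

    from collections import Counter

    scraped_reports = (
        ["the dam is safe"] * 3          # the original, accurate reporting
        + ["the dam has failed"] * 20    # a coordinated flood of fake stories
    )

    # A naive 'consensus' judgement: take the most frequently repeated claim.
    claim, count = Counter(scraped_reports).most_common(1)[0]
    print(f"'Consensus': {claim} ({count} of {len(scraped_reports)} sources)")

Repetition, not accuracy, decides the answer.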

A third concern is that people may choose to use only A.I. sources, compromising the original reporting on which the generative output is based and undermining its economic viability. This was Thomson’s main thrust, one supported by several other media executives present.

But, as far as I can determine from the edited text of his speech, Thomson did not touch on the impact of A.I.-digested news on the political information economy: the flows of information, news and opinion we rely on to inform our choice of nearly everything, from the make of a new car to the make-up of the local, state and Commonwealth governments. That is, its influence on democratic governance.

But some see a greater threat. On May 30, more than 350 top A.I. professionals, including Sam Altman, chief executive of OpenAI, the creator of ChatGPT, and Demis Hassabis, chief executive of Google DeepMind, along with 37 of his co-workers, signed a one-sentence open letter that aimed to put the risks of the rapidly developing technology to humankind in stark terms.

They said: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” 

Their principal concern is that the technology has the potential to grow sentient and, just as Hal did, attempt to destroy humans in some way to preserve its own being. Earlier in the year, a different public letter gathered more than 1,000 signatures from members of the academic, business and technology worlds, who called for an outright pause on the development of new A.I. models until regulation could be put in place.

For me, the greater and more immediate danger is the impact on knowledge and trust in the media, and the consequences of misinformation being gilded with a veneer of truth by A.I. When we don’t know what or whom to believe, we lose trust in our community, and society becomes ungovernable.

About 30 years ago, I wrote a whimsical piece for ABC Radio’s Ockham’s Razor, titled Silicon Futures, envisaging a world where silicon-based life-forms replace the present carbon-based life-forms, like us. That is, A.I., silicon life, replaces humankind.

Vincent O’Donnell.

Media analyst/Media researcher
