Yuval Harari’s Warning: Learn to Master AI (Before It Rules Us)

Israeli historian Yuval Harari

Since the arrival of ChatGPT, the most advanced artificial intelligence (AI) system available, launched last November by the startup OpenAI, entrepreneurs, scientists and intellectuals have been debating the tool’s impact on the future of humanity.

After Bill Gates, who this week posted a long essay on his website describing the technology’s potential and how its application in areas such as education and health can change the world, it is the turn of Israeli historian Yuval Noah Harari, author of “Sapiens” and “Homo Deus”, to enter the debate.

In contrast to Gates’ optimistic tone, Harari warns of artificial intelligence’s ability – especially that of its most advanced tool, GPT-4 – to manipulate and generate language, which he describes as “the operating system of human culture”. And that may not be good for humans.

Harari discusses the topic in a thought-provoking article published this Friday, March 24, in The New York Times. The Israeli thinker signs the text alongside two other respected figures in the technology field, founders of the Center for Humane Technology (CHT), a non-profit organization that advocates the ethical use of digital technology.

One of them is Tristan Harris, a former Google computer scientist who became a critic of the social ills fostered by social networks and big tech. The other is Aza Raskin, the (now regretful) creator of infinite scrolling, a technique that lets users keep scrolling through content on a screen without end, used on sites such as Facebook, Instagram and Pinterest.

In essence, Harari, Harris and Raskin discuss the transition in the history of civilization that we are going through. They observe that human beings often do not have direct access to reality. Everything we experience, learn and call reality passes through a cultural prism developed from the experiences of other human beings.

The challenge posed by AI, according to them, is to experience reality through a filter produced by a non-human intelligence.

“For thousands of years, we humans have lived inside the dreams of other humans,” reads one excerpt from the article. “We worship gods, pursue ideals of beauty, and dedicate our lives to causes that sprang from the imagination of some prophet, poet, or politician. Soon, we too will find ourselves living inside the hallucinations of non-human intelligence.”

Among their initial warnings, the authors draw attention to the fact that this transition, driven by systems such as GPT-4, should not happen at a faster pace than cultures can safely absorb.

“A race to dominate the market should not define the speed of deployment of humanity’s most important technology. We must move at a speed that will allow us to do this right,” they write.

This aspect is important because, according to the authors, most of the key abilities of artificial intelligence tools boil down to one thing: the ability to manipulate and generate language, whether with words, sounds or images – hence the definition of language as the “operating system of human culture”.

“From language emerge myths and laws, gods and money, art and science, friendships and nations – even computer code,” the authors write, adding that AI’s new mastery of language means it can now hack and manipulate the operating system of civilization. “By gaining mastery of language, AI is seizing the master key of civilization, from bank vaults to holy tombs.”

Provocation

The authors continue with a provocation: what would it mean for humans to live in a world where a large percentage of stories, melodies, images, laws, policies and tools are shaped by a non-human intelligence?

The answer comes in the form of a new warning: “AI could quickly consume all of human culture, digest it and start producing a flood of new cultural artifacts. Not just school essays, but also political speeches, ideological manifestos, and even holy books for new cults. By 2028, the US presidential race may no longer be contested by humans.”

When developing their reflection on the impact of mastering language, Harari, Harris and Raskin recall that the specter of being trapped in a world of illusions has haunted humanity for much longer than the specter of AI. “Soon we will finally be face to face with Descartes’ demon, with Plato’s cave, with the Buddhist Maya.”

This transition, the authors note, already began with social media, which they describe as the first contact between AI and humanity – a contact that humanity lost.

“In social media, primitive AI was used not to create content, but to curate user-generated content. The AI behind our news feeds is still choosing which words, sounds and images reach our retinas and eardrums, selecting those most likely to go viral and draw the most reaction and engagement,” they wrote.

According to the authors, although very primitive, “the AI behind social media was enough to create a curtain of illusions that increased social polarization, undermined our mental health and destroyed democracy”.

Because of this, millions of people have confused these illusions with reality. “The United States has the best information technology in history, but American citizens can no longer agree on who won the election”, they note.

Warnings

The authors dedicate the final part of the article to issuing some warnings. The first of them: “Large language models are our second contact with AI. We cannot afford to lose again.”

While acknowledging that AI has the potential to help us defeat cancer, discover life-saving drugs, and invent solutions to our climate and energy crises, they warn that “no matter how high the skyscraper of benefits AI builds, it will not stand if its foundation crumbles”.

“The time to reckon with AI is before our politics, our economy and our everyday lives become dependent on it”, they warn. “Democracy is a conversation, the conversation depends on language, and when language itself is hacked, the conversation stops and democracy becomes unsustainable.”

Harari, Harris and Raskin close the article with an appeal to world leaders to respond to this moment at the level of the challenge it presents. “The first step is to buy time to upgrade our 19th-century institutions for a post-AI world and learn to master artificial intelligence before it dominates us.”
