The Misplaced Fear of Artificial Intelligence: A Historical Perspective and Future Pathways

This article delves into the history of technophobia, its evolution, and the real fears associated with Artificial Intelligence (AI). It traces the origins of technophobia to how science fiction has often portrayed technology as a villain, reflecting societal anxieties. The article discusses how these fears have evolved over the decades and argues that the real risks of AI are more nuanced than the autonomous, villainous machines often depicted in media.

The meteoric rise of Artificial Intelligence (AI) applications like ChatGPT has made AI a part of our everyday lives in recent months. This has sparked a great deal of conversation about the potential and the risks of AI. However, many of these discussions are based on misunderstandings about what AI really is and how it works. It’s important that we steer these conversations towards the real and immediate challenges that AI brings.

This article will cover the following aspects:

  1. The Origins of Technophobia
  2. The Evolution of Technophobia
  3. The Real Fear of AI
  4. How (not) to regulate AI
  5. How to prevent a disaster
  6. Conclusion
  7. Bonus: A glimpse of Technophilia

If you don’t feel you could explain what AI is in your own words, feel free to check out this simple explanation in plain language (~7 min read) here on tenet.ventures.

The Origins of Technophobia

Artificial Intelligence (AI) has been a source of fascination and fear since the dawn of science fiction. The roots of the genre, later dubbed Sci-Fi, can be traced back to the 1800s, when Luigi Galvani’s experiments with electricity on dead tissue sparked a fascination with the potential of technology. Most notably, these experiments inspired Mary Shelley’s classic “Frankenstein”, in which technology is used to breathe life into a lifeless being, creating a monster.

This fascination often took a dark turn, as authors and filmmakers used technology as a symbol of fear and control, reflecting society’s anxieties about the unknown. From the earliest days of science fiction, technology was cast as a villain. As the first industrial revolution raged, humans left agricultural life to work in factories, becoming cogs in a machine.

Maria, an evil Artificial Intelligence controlling humanity in Fritz Lang’s “Metropolis” (1927)

It’s incredible to see that the thought of an evil artificial intelligence controlling humanity is already a century old. In Fritz Lang’s 1927 film “Metropolis”, a robot, disguised as a benevolent prophet, manipulates the working class of a divided society into a violent revolt. In Karel Čapek’s 1920 play “R.U.R.” (“Rossum’s Universal Robots”), artificial humans rise up against their creators. And in the Soviet film “Aelita: Queen of Mars” (1924), an engineer travels to Mars, falls in love with its queen, and together they incite an uprising against the planet’s ruling regime.

Historians and social psychologists have linked the rise of Science Fiction directly to the first industrial revolution. Humans had left an agricultural life connected to nature to work in lifeless factories, where they became cogs in someone else’s machine: cheap and replaceable. It’s important for us to understand the parallel to what our society is currently going through.

The Evolution of Technophobia

The fear of the new socioeconomic system continued to be portrayed in mass media into the 1930s. Notably, Charlie Chaplin’s “Modern Times” (1936) depicts the dehumanizing effects of the industrial revolution.

Humans as cogs in a lifeless machine in Charlie Chaplin’s “Modern Times” (1936)

The dominant themes of the following decades then dictated the course of dystopian Science Fiction and technophobia:

  • During World War II in the late 1930s and 1940s, focus shifted away from fiction. However, fueled by ‘Foo Fighter’ sightings reported by Air Force pilots during the war, a phenomenon later labeled ‘Unidentified Flying Objects’ (UFOs), Science Fiction gained a new focal point: alien life and interplanetary exploration.
  • During the Cold War of the 1960s to 1980s, the fear of technology returned, now driven by fears of nuclear warfare and mismanaged technological advancement.

The narrative during these eras also emphasized the importance of technical superiority for world domination, whether by humans, machines, or aliens.

The 1990s saw a resurgence of the fear of losing control to machines, as depicted in the “Terminator” films and “The Matrix”. The 2000s and 2010s, however, brought a new theme to the fore: the ethical and social dilemmas of living with artificial intelligence, as seen in “I, Robot”, “Her”, and “Westworld”.

The Real Fear of AI

Let’s take a step back.

Despite the varied themes, one common thread runs through these narratives: the ‘new technology’ is often portrayed as an autonomous agent with full authority, a villain embodying society’s fears. This portrayal, particularly prevalent in the U.S., has fueled technophobia.

However, new technologies like Artificial Intelligence are not merely a plot device to drive box office success. The real risks of AI are not about autonomous machines turning against us. They are more subtle and insidious:

  • Mass manipulation of people, to the point where a machine may persuade or manipulate even its creators.
  • Autonomous machine-building machines that may turn against their creators.
  • Warring nations deploying regenerative but irreversible technology, where humanity loses control and is thrown back into the Stone Age.

Other severe risks include bias and discrimination, economic disruption, weaponization, and catastrophic accidents.

How (not) to regulate AI

Technology is neither bad nor good. It will have to be carefully harnessed and managed with thought and ethics.

In recent weeks, large organizations like OpenAI, Google, and Microsoft have contributed to the discussion of how best to regulate AI. This opens up two potential conflicts of interest:

  1. They may have an incentive to influence or evade rules that would limit their power, profits, or innovation.
  2. They have an incentive to raise barriers to entry for smaller startups by pushing for more time-consuming and costly regulatory compliance.

Although regulating AI will be vital and makes intuitive sense, we will have to be careful not to set compliance requirements so high that they stunt innovation. The finance industry offers a cautionary example: heavy regulation has forced many smaller banks to close and has made the remaining banks increasingly similar, raising the systemic risk of an economic meltdown.

How to prevent a disaster

Going back to geopolitical game theory, the answer to how to prevent a catastrophe that could cripple or even exterminate humanity is clear: international cooperation. Whether this takes the form of a new UN organization dealing with technology-related matters or member states coming together to sign ‘no-brainer’ agreements is yet to be decided. Examples include the prevention of:

  • Autonomous AI that aims to penetrate and shut down the critical infrastructure of any nation.
  • Society-influencing algorithms making autonomous decisions without a ‘human in the loop’.
  • Advanced machine learning and Generative AI being used to manipulate the political opinions of a population through social media.
  • Machine-building machines intended to act as some form of autonomous weapon.
  • Artificial general intelligence capable of making autonomous decisions in warfare scenarios without a ‘human in the loop’.

Beyond preventing risks that could annihilate humanity, it will also be important to uphold ethical standards in each country. For example, independent audits of training data to identify risks and biases may be required to prevent biased Machine Learning models, as long as such audits do not stunt development or centralize industry power with incumbents.

Conclusion

In conclusion, while the fear of AI is deeply ingrained in our culture, it is crucial to remember that these fears are often based on the worst-case scenarios depicted in science fiction. The real risks of AI are more nuanced and require international cooperation and regulation to mitigate. By focusing on these real risks and working together, we can ensure that AI is used for the benefit of all, rather than becoming the villain of our own creation.

Bonus: A glimpse of Technophilia

Technophobia is, however, not a universal phenomenon. Japan, for instance, has largely avoided technophobia by embracing technology as a partner rather than a threat, as seen in its positive depictions of robots and AI in popular culture.

[top to bottom, left to right] Ultraman, the giant Gundam robot in Yokohama, the famous and beloved robot ASIMO retiring after 22 years, and a robot taking orders at a SoftBank café in Tokyo

So, what explains the difference from the West’s fear of technology?

In the 1800s, Japan’s ruling class realized that it lagged massively behind Western powers in technology and feared that it could face the same fate as China. In a peaceful transition, the Shogun (leader of the Samurai) transferred power back to the Emperor. What followed was a period of rapid industrialization and modernization in the late 19th and early 20th centuries, which fostered a culture of innovation and adaptation to new technologies.

After World War II, Japan’s post-war economic miracle relied heavily on technological development and the export of high-quality electronics, automobiles, and other products. This was further fueled by demographic challenges, which created demand for technology to support social welfare, health care, and productivity, as well as by geographic constraints that call for technology to optimize energy efficiency, disaster prevention, and urban planning.

In a way, many Western countries now face a similar dilemma: lagging behind in technology, demographic challenges, and geographic constraints.

This article was originally posted by our portfolio company Q by TENET.
