'Killer AI' is real. Here's how we stay safe, sane and strong in a brave new world

Artificial intelligence will transform our lives in countless ways and holds great promise. But it could also kill us. We've developed a framework that will help us stay safe.

The rapid advancement of artificial intelligence (AI) has been nothing short of remarkable. From health care to finance, AI is transforming industries and has the potential to elevate human productivity to unprecedented levels. However, this exciting promise is accompanied by a looming concern among the public and some experts: the emergence of "Killer AI." In a world where innovation has already changed society in unexpected ways, how do we separate legitimate fears from those that should still be reserved for fiction?

To help answer questions like these, we recently released a policy brief for the Mercatus Center at George Mason University titled "On Defining ‘Killer AI.’" In it, we offer a novel framework to assess AI systems for their potential to cause harm, an important step toward addressing the challenges posed by AI and ensuring its responsible integration into society.

AI has already shown its transformative power, offering solutions to some of society's most pressing problems. It enhances medical diagnoses, accelerates scientific research, and streamlines processes across the business world. By automating repetitive tasks, AI frees up human talent to focus on higher-level responsibilities and creativity.

The potential for good is boundless. It may be optimistic, but it is not unreasonable to imagine an AI-fueled economy where, after a period of adjustment, people are significantly healthier and more prosperous while working far less than we do today.


It is important, however, to ensure this potential is achieved safely. To our knowledge, our effort to assess AI's real-world safety risks is also the first attempt to comprehensively define the phenomenon of "Killer AI."

We define it as AI systems that directly cause physical harm or death, whether by design or due to unforeseen consequences. Importantly, the definition encompasses both physical and virtual AI systems while distinguishing between them, recognizing that harm could arise from many forms of AI.

Though its scenarios are exaggerated, science fiction can at least help illustrate how physical and virtual AI systems could lead to tangible physical harm. The Terminator has long served as an example of the risks of physical AI systems. Potentially more dangerous, however, are virtual AI systems, an extreme example of which appears in the newest "Mission Impossible" movie. Our world really is becoming increasingly interconnected, and our critical infrastructure is not exempt.


Our proposed framework offers a systematic approach to assessing AI systems, with a key focus on prioritizing the welfare of the many over the interests of the few. By considering not just the possibility of harm but also its severity, the framework allows for a rigorous evaluation of AI systems' safety and risk factors. It has the potential to uncover previously unnoticed threats and enhance our ability to mitigate the risks associated with AI.

Our framework enables this by requiring a deeper consideration and understanding of the potential for an AI system to be repurposed or misused, as well as the eventual repercussions of its use. Moreover, we stress the importance of interdisciplinary stakeholder assessment in approaching these considerations, which will permit a more balanced perspective on the development and deployment of these systems.


This evaluation can serve as a foundation for comprehensive legislation, appropriate regulation, and ethical discussions of Killer AI. Our focus on preserving human life and ensuring the welfare of the many can help legislative efforts address and prioritize the most pressing concerns raised by any potential Killer AIs.

Our emphasis on involving multiple, interdisciplinary stakeholders may also encourage people from different backgrounds to join the ongoing discussion. Through this, we hope future legislation can be more comprehensive and the surrounding debate better informed.

While the framework is a potentially critical tool for policymakers, industry leaders, researchers, and other stakeholders to rigorously evaluate AI systems, it also underscores the urgency of further research, scrutiny, and proactivity in the field of AI safety. This will be challenging in such a fast-moving field. Fortunately, researchers will be motivated by the ample opportunities to learn from the technology.

AI should be a force for good, one that enhances human lives rather than putting them in jeopardy. By developing effective policies and approaches to the challenges of AI safety, society can harness the full potential of this emerging technology while safeguarding against its potential harms. The framework we present is a valuable tool in this mission. Whether fears about AI prove justified or unfounded, we will be better off if we can navigate this exciting frontier while avoiding its unintended consequences.

Nathan Summers is an LTS research analyst. He is the co-author of a recent Mercatus Center at George Mason University study, "On Defining ‘Killer AI.’"
