Artificial intelligence (A.I.) may serve as the cornerstone of the next evolution of self-driving cars, virtual personal assistants, and a wide array of other modern amenities that our industry, and many others, intends to lean on heavily moving forward. Yet this cutting-edge frontier of the tech world isn't without its own unique set of potential pitfalls and hazards. In fact, leading voices in the digital community, including Elon Musk and Stephen Hawking, warn that the race to human-level A.I. has put the marketplace on the verge of an "arms race" that could spiral out of control.
So how do these tech and science luminaries hope to safeguard humanity from an arms race that could lead to uncontrolled, and potentially harmful, A.I. systems that go rogue or are put to other nefarious purposes? According to Newsweek's Anthony Cuthbertson, regulating the scope of power and influence held by these computer systems starts with a series of guidelines, 23 to be exact. Known as the Asilomar A.I. Principles, these rules greatly expand upon science fiction writer Isaac Asimov's "Three Laws of Robotics," which serve not only as the foundation of "Runaround," his iconic 1942 short story, but also of the modern ethics debate surrounding A.I. and robotics.
Digging a little deeper, the Asilomar Principles, penned by members of the Future of Life Institute (FLI), aim to foster shared responsibility, and shared prosperity, within the tech community. Safeguards such as the "big red button" principle for dealing with A.I. that has gone rogue also serve as hallmarks of this comprehensive document.
As of now, these guidelines and regulatory procedures have received signatures of support from over 700 A.I. specialists, researchers, and other industry luminaries, and even more are expected to join the chorus of those calling for a larger safety net around A.I. research and development.
Put simply, the best and brightest in this field are taking the threat of unregulated growth in A.I. very seriously, ushering in a new age of accountability and safety within this marketplace.
Interested in learning more about the FLI's Asilomar Principles, the conversation surrounding a burgeoning A.I. arms race, and how everyone from Google's DeepMind team to philosophy professors at the University of Oxford is lending support to the first public discourse on the need for A.I. safeguards and oversight? Then give the link below a click and catch up with Cuthbertson and the rest of the Newsweek team as they dig into the details of this groundbreaking industry movement.