April 24, 2024

Generally, when artificial intelligence (AI) is brought up in conversation, it is in the context of science fiction. When people attempt to relay serious concerns about the possible dangers of AI, they are met with derision, denial or worries about the job market. However, AI, superintelligent general AI to be more specific, is considered one of the biggest and most pressing existential threats by some of the brightest minds alive today.

Superintelligent narrow AI already exists. A pocket calculator is a form of narrow AI that is far superior to the greatest mathematician who ever lived at performing arithmetic. And we have created much more powerful AI. Back in 2015, AlphaGo, an AI developed by Google’s DeepMind, became the first AI to defeat a professional human Go player (Go being an ancient strategy game considered far more complex than chess). Its successor, AlphaZero, then beat the world’s best chess-playing program, which was already superhuman, after taking only four hours to teach itself the game.

Now consider that, barring an unimaginable disaster that wipes out all human knowledge, humankind will continue to improve AI technology. Assuming that intelligence is just information processing, and that there is nothing especially unique or vital about the substrate in which that information is processed (our slow, soggy brains versus lightning-fast computer processors), it is inevitable that we will eventually build something approximating human-level intelligence. But, as Sam Harris, one of the many brilliant minds drawing attention to this issue, points out in his TED Talk on the subject, the moment we have human-level AI, we have superhuman AI. Computer circuits operate about a million times faster than the biological circuits making up our brains. So if a human-level AI were created and put to work on some intellectual task for a week, it would perform roughly 20,000 years of human-level intellectual work. And a human-level AI would be able to make changes to itself, very quickly making itself superhuman in more than just speed.
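For readers curious where that 20,000-year figure comes from, here is a minimal back-of-envelope sketch of the arithmetic, assuming the million-fold speed advantage stated above; the specific numbers and variable names are illustrative assumptions, not taken from the talk itself.

```python
# Back-of-envelope check of the "20,000 years" figure: one week of machine time,
# assuming (as the argument does) a roughly million-fold speed advantage over
# biological circuits. The constants below are illustrative assumptions.
SPEEDUP = 1_000_000        # assumed machine-to-human "thinking speed" ratio
WEEKS_PER_YEAR = 52.18     # average number of weeks in a year

human_equivalent_weeks = 1 * SPEEDUP                      # one week of machine work
human_equivalent_years = human_equivalent_weeks / WEEKS_PER_YEAR

# Prints about 19,164 human-equivalent years, i.e. roughly 20,000.
print(f"{human_equivalent_years:,.0f} human-equivalent years")
```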

Upon being confronted with this information, many people’s gut response is to fear a Terminator-style conquest by the machines. But, as computer scientists such as Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, point out, the danger is not a malevolent AI but a competent one.

Consider the paper-clip maximizer thought experiment proposed by the Oxford philosopher Nick Bostrom: Imagine that an AI with general intelligence beyond what humans can conceive is built and tasked with making as many paper clips as possible. Obviously, there would be a point where we humans would have more than enough paper clips, but any attempt to stop the AI at that point would likely be interpreted as an obstacle to making more paper clips, one to be fought with every resource available. Eventually, the AI might simply decide that the atoms making up our bodies would be better used as paper clips, and destroy humanity in its drive to make as many as possible.

Obviously, this is an outlandish and extreme thought experiment, but it highlights the idea of alignment. If we ever do create a superintelligence, it is imperative that its goals be aligned with ours, because even the slightest deviation could spell disaster.

AI research is already being pursued, and it is important work. Humanity has problems we would like to solve. Curing cancer and genetic diseases, eliminating starvation and poverty, building healthy and stable economies, fixing the climate and preserving the natural world are all issues we would like to see addressed in an effective and timely manner. More intelligence, the ability to process more information, is clearly desirable. But warnings about the dangers of superintelligence from people such as Stephen Hawking, Bill Gates and Elon Musk should not be ignored. Close attention should be paid to people like Yudkowsky who are working on what values should be hardwired into AI. The timeline is unclear, and debates still rage over what the real dangers are, but why roll the dice with the future of humanity?