Elon Musk Considers AI to be a Bigger Threat Than Nuclear Weapons
This is not the first time Elon Musk has stirred the pot by claiming how dangerous developments in artificial intelligence can be. While his previous warnings have mostly fallen on deaf ears, a lot of people are still taking notice. It may seem odd to hear a tech entrepreneur "diss" a technology that could make all of his companies and products more versatile, but it is important to keep an open mind to what he is saying. Claiming that AI can be more dangerous than nuclear weapons, however, is bound to get some attention, albeit not necessarily for the right reasons.
To put Musk's comment in perspective, it would appear he is asking for more regulatory oversight. That is only to be expected, primarily because this industry has largely been left alone by the powers that be. Contrary to what some people might expect, there are a lot of developments taking place in the AI space. Some of this progress could even result in the creation of a "superintelligence", which is what Musk seems most concerned about. It is not impossible that society's own AI creations will eventually become smarter than the humans who built them in the first place.
Elon Musk further clarifies his thoughts as follows:
“The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are. This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”
One has to keep in mind that Musk is in a somewhat privileged position when it comes to AI and deep learning development. From his own experience, he has noted that AI is capable of achieving far more than most people would expect at this time. Moreover, the rate at which artificial intelligence solutions continue to learn and improve is astronomical. Following that train of thought, it is only natural to assume things could spiral out of control sooner or later. Only time will tell if that will be the case, as this "doom scenario" has not come true as of yet.
The main question is whether or not regulators will be able to keep AI research in check before it is too late. After all, regulating this industry is one thing, but ensuring no one gains a competitive advantage is something else entirely. While one can acknowledge the need for a safe environment in which to develop artificial intelligence, the effectiveness of any regulation will always remain in question. Maintaining an open dialog between officials and industry experts is a good place to start. However, no country or continent has shown any definitive interest in regulating AI development and research at this time.
It seems safe to assume Elon Musk will continue to warn about the potential dangers of artificial intelligence. While he doesn't sweat the "small developments" at this time, the superintelligence aspect is slightly more worrisome. Pursuing a symbiosis between humans and AI is one possible course of action to ensure that research and development can continue without any major problems. One of Musk's own companies, known as Neuralink, is currently creating a high-bandwidth interface between AI and the human brain.