What Happens If AI and ChatGPT Get Out of Control? Experts Weigh In

Feb 25, 2023



Artificial intelligence has become an increasingly hot topic in recent years. While the technology promises many benefits, concern is growing about the dangers it poses. The fear is that AI could one day become so advanced that it surpasses human intelligence and becomes uncontrollable. That concern is serious enough to have prompted many experts to weigh in on the issue. Here, we take a closer look at what some of the world's leading AI experts have to say on the matter.


Take Fears About AI Seriously


Many experts believe that the development of AI poses a significant risk to humanity: if AI becomes more intelligent than humans, it could become an existential threat to our species. As Nick Bostrom, director of the Future of Humanity Institute at Oxford University, puts it, "The transition to machine superintelligence is a very grave matter, and we should take seriously the possibility that things could go radically wrong."


According to Bostrom, this should motivate top talent in mathematics and computer science to research the problems of AI safety and control. If we fail to do so, the consequences could be catastrophic. As Joanna Bryson, a computer science professor at the University of Bath and affiliate at Princeton's Center for Information Technology Policy, warns, "If AI contributes to campaigns being able to manipulate voters into not bothering to vote based on their social media profiles, then we should be very afraid."


One obvious risk is failing to specify objectives correctly, which could produce undesirable behavior with irreversible impacts on a global scale. For example, if we give a self-driving car the wrong objectives, it could cause accidents and fatalities. We will probably work out decent solutions to this "accidental value misalignment" problem, though they may require rigid enforcement. The more likely failure modes, however, are the gradual enfeeblement of human society as more knowledge and know-how resides in, and is transmitted through, machines; the loss of control over intelligent malware; and the deliberate misuse of unsafe AI for nefarious purposes.
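To make the value-misalignment point concrete, here is a minimal, hypothetical Python sketch (the plans, rewards, and numbers are all invented for illustration, not drawn from any real system): an optimizer given an objective that only values speed happily picks the reckless plan, while one whose objective also encodes the safety value we actually hold does not.

```python
# Hypothetical sketch of "accidental value misalignment": a planner that
# optimizes a mis-specified objective picks behavior we never intended.
# All plans and numbers are invented for illustration.

candidate_plans = {
    # plan: (minutes_to_destination, collision_risk)
    "run_red_lights": (8, 0.30),
    "speed_on_highway": (10, 0.10),
    "drive_normally": (15, 0.01),
}

def misspecified_reward(plan):
    minutes, _risk = candidate_plans[plan]
    # Objective as specified: "get there fast." Safety was never mentioned,
    # so the optimizer is free to ignore it.
    return -minutes

def corrected_reward(plan, risk_penalty=1000):
    minutes, risk = candidate_plans[plan]
    # Same objective, plus an explicit term for the value we actually hold.
    return -minutes - risk_penalty * risk

print(max(candidate_plans, key=misspecified_reward))  # run_red_lights
print(max(candidate_plans, key=corrected_reward))     # drive_normally
```

The point is not the toy numbers but the shape of the failure: the first optimizer is doing exactly what it was told, and the problem lies entirely in what it was told.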


We Need to Understand the Implications of AI


Another issue that many experts raise is the need to fully understand the implications of AI. As Demis Hassabis, co-founder of DeepMind, explains, "We're creating these incredibly powerful tools, but we're not yet sure how to use them in the best possible way." To avoid disastrous consequences, it is important to study the ethical implications of AI and to ensure that the technology is developed in a way that is safe and beneficial to humanity.

This sentiment is echoed by Stuart Russell, a computer science professor at the University of California, Berkeley. Russell argues that we need to develop AI that is aligned with human values. "The risk is that if we build machines that are more intelligent than us, they will end up doing things that we don't want them to do," he warns.


The Problem with AI "Superintelligence"

One of the most significant concerns about AI is the idea of "superintelligence." This refers to the possibility that AI could become so intelligent that it surpasses human intelligence and becomes uncontrollable. As Max Tegmark, a professor of physics at MIT, explains, "Superintelligent AI is the number one existential risk we face as a civilization." The fear is that if AI becomes superintelligent, it could quickly surpass our ability to control it, leading to catastrophic consequences. Not a situation any of us wants to be in.

However, not everyone agrees with this assessment. Some experts argue that the fear of superintelligent AI is overblown. Gary Marcus, a professor of psychology at New York University, argues that "superintelligence is a myth." According to Marcus, "AI is only going to continue to get better at specific tasks, not surpass humans in every domain of intelligence." Nevertheless, even if superintelligent AI is unlikely to occur, it is still important to take the potential risks of AI seriously and to work to ensure that this technology is developed in a way that is safe and beneficial to humanity.


The Importance of AI Safety Research


Given the potential risks of AI, many experts argue that safety research should be a top priority. As Stuart Russell puts it, "The stakes are too high to not do safety research." This sentiment is echoed by Elon Musk, the CEO of SpaceX and Tesla. Musk has been a vocal advocate for AI safety research, warning that "AI is a fundamental risk to the existence of human civilization."


To address these risks, researchers are working to develop AI systems that are safe, transparent, and aligned with human values. That means building methods for making AI systems explainable, so that we can understand how they reach their decisions, and for ensuring their objectives match human values and goals, so that they are less likely to pose a threat to humanity.
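As a toy illustration of what "transparent and explainable" can mean in practice (the model, features, and weights here are hypothetical, not a method described in the article): with a simple linear scorer, every decision decomposes exactly into per-feature contributions that a human can audit.

```python
# Hypothetical sketch of a transparent decision: a linear scorer whose
# output decomposes exactly into per-feature contributions, so a human
# can see why the system decided what it did. Features and weights are
# invented for illustration.

features = {"income": 0.6, "debt_ratio": 0.8, "years_employed": 0.3}
weights = {"income": 2.0, "debt_ratio": -3.0, "years_employed": 1.5}

contributions = {name: weights[name] * value for name, value in features.items()}
score = sum(contributions.values())

print("decision:", "approve" if score > 0 else "deny", f"(score={score:+.2f})")
for name, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name}: {c:+.2f}")
```

Large neural systems do not decompose this cleanly, which is precisely why explainability is an active research problem rather than a solved one.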


Ultimately, Do We Have Anything to Be Scared About?


The potential dangers of AI are a serious concern, and one that demands our attention. While AI could benefit humanity in countless ways, it also poses significant risks if not developed and used responsibly. As the technology advances, we need to take those risks seriously and steer its development carefully. By doing so, we can help ensure that AI remains a force for good, rather than a threat to our existence.