Why do technology leaders want to temporarily suspend the development of artificial intelligence?

“We urge laboratories working on developing artificial intelligence systems more powerful than GPT-4 to suspend such development for at least six months. This suspension should be public and verifiable. If such a temporary suspension is not implemented soon, governments should step in and impose a temporary ban.”

The above words come from an open letter that has already sparked discussion in many circles. Its authors and signatories request that efforts to build systems more capable than GPT-4, which was recently opened to the public, be suspended for at least six months. The letter states that this suspension should be public, though it says little about how compliance would be verified. It also calls for government intervention if necessary.

The open letter making these calls was published by a non-profit organization named ‘Future of Life Institute,’ which works to reduce the risks that misuse of technology poses to human life. The artificial intelligence firm OpenAI introduced its chatbot, ChatGPT, to the public on November 30 of last year. Ask the chatbot a question or give it an instruction, and it quickly answers the question or provides the requested information.

Although artificial intelligence had drawn discussion and criticism before, the introduction of this chatbot gave the debate new depth. OpenAI announced GPT-4, the fourth iteration of the GPT series, on March 14 of this year. The new version is more accurate and efficient than its predecessor.

With the release of GPT-4, it is clear that Microsoft and OpenAI are continuing to push for ever more capable artificial intelligence. Beyond these two, many other laboratories are working on systems more powerful than GPT-4. Against this backdrop, an open letter was published on March 22, signed by more than eleven hundred scientists, technology leaders, and entrepreneurs. Elon Musk, Steve Wozniak, Yoshua Bengio, and Yuval Noah Harari are among the signatories. Elon Musk founded the renowned automaker Tesla.

Steve Wozniak co-founded the well-known technology company Apple. Award-winning artificial intelligence researcher Yoshua Bengio and Yuval Noah Harari, author of the best-selling book “Sapiens,” are also among them.

The letter raises several concerns: the proliferation of disinformation, the fear of massive job losses, the replacement of human intelligence, the loss of human control over civilization, and the lack of sufficient time and governance to adapt to artificial intelligence.

Since the recent emergence of ChatGPT and GPT-4, many people have turned to these systems as a source of information, and herein lies the main problem of false information or misinformation. Ordinarily, we view an event from different perspectives, and an accurate picture emerges from observing it from several angles. Likewise, when there is doubt about an account of an event, we can seek a description of it from another source.

Verifying the truth in this way greatly reduces the chance of accepting a false report. Artificial intelligence offers no such safeguard: if it produces incorrect information, that information can reach millions of people. And if humans come to over-rely on artificial intelligence systems, they will feel no need to go elsewhere to verify the truth. As a result, artificial intelligence cannot be ruled out as a platform for spreading misinformation or propaganda.

It has long been predicted that artificial intelligence would cost many people their jobs, and the assumption is not unreasonable. If a virtual platform can do the work of many people, an organization will naturally lay off the surplus workforce. Related to this are the replacement of human intelligence and the loss of human control over civilization: it is feared that if human intelligence is replaced, the importance of humans in the world will diminish and artificial intelligence will take their place. As a result, the orderly social structure of human civilization could fall into crisis.

The letter's central request is for at least six months to adjust to the most recent AI developments. History shows that people have continually adapted to technology from one generation to the next. However, as more and more AI systems have been released for innovation and public use, instability and disarray have followed. Leaders in science, research, and technology have therefore demanded adequate time to prepare for the move from one stage to the next.

The letter at the center of the debate has so far received no substantial response from artificial intelligence labs. A representative of OpenAI stated that more powerful AI systems are not yet in the works. Many of the letter's detractors assert that technological advancement cannot be stopped, and given the current excitement and the enormous possibilities these systems offer, there is no guarantee that organizations working on artificial intelligence will turn so swiftly away from the road to new versions. If the proposed interim ban is adopted, however, it could head off a great deal of future upheaval and calamity.
