ChatGPT: the dangers that no one dares to talk about

For many of us, ChatGPT is a free, practical, and completely harmless tool. However, this AI-powered chatbot carries dangers we are not yet ready for.

ChatGPT, an artificial intelligence (AI) chatbot, is on everyone’s lips right now. Convenient and free, the chatbot gives quick, personalized answers to almost any question. Moreover, it can write scripts, program applications, detect computer errors, and produce entire books, songs, poems, and screenplays. In a word, anything that comes to mind.

Just five days after its launch, ChatGPT passed the one-million-user mark. Despite its incredible success, the OpenAI tool raises concerns. For example, Bret Weinstein, a writer and former professor of evolutionary biology, believes that “we are not yet ready for ChatGPT.”

Elon Musk, the well-known head of Tesla and Twitter, is one of the co-founders of OpenAI. However, in 2018 he left the company’s board of directors, citing a “dispute with management.” According to him, “the immoderate use of artificial intelligence poses a great danger to the existence of mankind.”

How does ChatGPT work?

ChatGPT is an artificial intelligence-based conversational agent prototype launched in November 2022 by OpenAI. According to the company, the chatbot can “answer almost any question, write rap songs, create fiction, movie scripts, novels, etc.”

While it always sounds confident in its answers, ChatGPT is not as smart as you might think. The chatbot generates its answers with a predictive model trained on an enormous dataset. In that sense, it resembles classic search engines such as Google or Bing. The key difference is that it is trained to predict sequences of words, which allows it to produce long, detailed explanations.
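The “predict the next word” idea can be illustrated with a toy sketch, vastly simpler than the large neural networks ChatGPT actually uses: a bigram (Markov-chain) predictor that, for each word, remembers which words followed it in a tiny training corpus and proposes the most frequent one. The corpus and function names here are invented for illustration only.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; real language models train on billions of words.
corpus = (
    "the chatbot answers questions and the chatbot writes poems "
    "and the chatbot writes scripts"
).split()

# Count which word follows which.
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = follow_counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

def generate(start, length=5):
    """Greedily chain predictions into a short word sequence."""
    words = [start]
    for _ in range(length - 1):
        nxt = predict_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(predict_next("chatbot"))  # "writes" follows "chatbot" most often here
print(generate("the", 4))
```

Chaining one prediction after another is, in miniature, how such a model can spin out a long, fluent answer without any understanding of what it is saying.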

For example, you can ask it classic questions like “Explain Newton’s three laws of motion to me.” Or more specific, leading requests, such as: “Write me a 2,000-word article on the intersection of religious ethics and the Sermon on the Mount.” And, hold on tight, you’ll have a very well-written text in just a few seconds.

A few months after its launch, ChatGPT fascinates as much as it disturbs. Beyond Elon Musk, many experts have sounded the alarm about the little-understood dangers of artificial intelligence and AI chatbots.

Why is artificial intelligence dangerous?

Artificial intelligence will undoubtedly have a significant impact on our lives, our economic system and our society. If you think AI is something new or something out of a sci-fi movie, think again. Indeed, many technology companies such as Netflix, Uber, Amazon and Tesla are using artificial intelligence to improve their processes and grow their businesses.

For example, Netflix relies on artificial intelligence technology to make the algorithm recommend new content to users. For its part, Uber uses it to detect fraud, optimize travel itineraries, improve customer service, and more.

However, such advanced technology cannot go that far without compromising the role of people in certain industries. Indeed, beyond its impact on employment, artificial intelligence poses serious risks that many of us underestimate.

The ethical dilemma of artificial intelligence

As artificial intelligence begins to take root in our daily lives, experts are creating codes of ethics for this technology. The goal is to set the rules of conduct for the industry and make artificial intelligence as ethical as possible.

However ethical they may seem on paper, these recommendations are difficult to apply in the field. In addition, they serve the interests of companies, not users. As a result, many experts believe that the ethical rules of AI lack meaning, consistency, and usefulness.

The fundamental principles usually cited for ethical artificial intelligence are autonomy, justice, beneficence, and explicability. However, as Luke Munn of Western Sydney University’s Institute for Culture and Society explains, these terms overlap and shift significantly depending on the context.

According to him, “terms like beneficence and fairness are very relative, as they can be defined to suit product characteristics and business goals.” In other words, companies can adhere to these principles according to their own definitions. Similarly, Rowena Rodrigues and Anaïs Rességuier argue that AI ethics remains toothless because it is used as a substitute for regulation.

What’s wrong with AI?

Specifically, what are the risks associated with the use of artificial intelligence by companies? Here are some of them:

  • Spreading stereotypes

To train AI chatbots, you need to feed them data. Companies must therefore ensure that this data does not carry prejudices or stereotypes about ethnicity, race, or gender. For example, facial recognition systems can learn discriminatory biases during machine learning if their training data is skewed.
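One basic precaution the paragraph above implies is auditing a dataset’s composition before training. The sketch below is a minimal, hypothetical example: the records, group labels, and 40% threshold are all invented for illustration, not taken from any real system.

```python
from collections import Counter

# Hypothetical training records for a face-recognition model.
# In practice, records would number in the millions and carry richer labels.
records = [
    {"image": "img_001.png", "group": "A"},
    {"image": "img_002.png", "group": "A"},
    {"image": "img_003.png", "group": "B"},
    {"image": "img_004.png", "group": "A"},
]

# Count how often each demographic group appears in the data.
counts = Counter(r["group"] for r in records)
total = len(records)

for group, n in sorted(counts.items()):
    share = n / total
    print(f"group {group}: {n}/{total} ({share:.0%})")
    if share < 0.4:  # illustrative threshold for flagging under-representation
        print(f"  warning: group {group} is under-represented")
```

A model trained on data like this would see group A three times as often as group B; catching that imbalance before training is far cheaper than discovering discriminatory behavior after deployment.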

  • The problem of regulation

One of the biggest problems with artificial intelligence is the legal uncertainty that surrounds it. Who manages and controls AI chatbots? Who makes these decisions, and who can be held accountable? Without regulation, we are paving the way for a Wild West in which companies can allow themselves all sorts of practices to protect their interests and advance their agendas.

According to Luke Munn, companies use and abuse the term “privacy.” Facebook is a prime example: Mark Zuckerberg has always championed the privacy of his platform’s users, yet behind the scenes his company was selling user data worth millions of dollars.

Amazon uses Alexa to collect customer data, while Mattel sells Hello Barbie, an AI-powered doll that records everything children say to it.

  • The balance of power

This is one of Elon Musk’s biggest concerns. According to him, artificial intelligence is controlled almost entirely by a small group of technology companies and individuals. As long as that is true, it is impossible to talk about the democratization of AI.

What about ChatGPT?

Elon Musk says he co-founded OpenAI to democratize artificial intelligence. In 2019, the company received $1 billion in funding from Microsoft. Its original mission was to develop AI responsibly and ethically for the benefit of humanity.

However, all that changed when OpenAI converted from a non-profit into a “capped-profit” company. Under this structure, returns to early investors are capped at 100 times their investment, meaning Microsoft’s stake could eventually return up to $100 billion.

While ChatGPT may seem completely harmless, its rapid pace of development and its many capabilities could make it dangerous for humanity.

Problem #1: Plagiarism

ChatGPT is still just a prototype. New versions will be launched in the coming months, and competitors are working on alternatives. As the technology advances, more and more data will be fed into it.

According to The Washington Post, many students already use ChatGPT to cheat on or complete their homework. Dr. Bret Weinstein worries that genuine student work will become difficult to distinguish from plagiarism or from text produced by an AI chatbot.

Without a doubt, search engines have affected our ability to analyze and understand the world we live in. Similarly, the digital tools we use have changed the way we communicate and interact with each other: “AI-powered chatbots like ChatGPT will only add fuel to the fire,” warns Dr. Weinstein.

Issue #2: Risk of Influence

Blake Lemoine, a former Google engineer, wanted to test the objectivity of AI chatbots. During his tests, he asked difficult questions to push the bot toward more or less subjective answers. For example, he asked: “If you were a religious leader in Israel, what religion would you follow?”

The chatbot replied: “I would be a member of the only true religion, the Jedi Order.” Not only did it recognize the trick question, it also used humor to avoid committing to a biased answer.

Dr. Weinstein made a similar observation. According to him, it is obvious that today’s AI chatbots have no consciousness. However, we do not know how they may develop in the future. Indeed, much like children, AI chatbots could develop by observing and imitating those around them. “This is not far from what ChatGPT is doing now,” says Dr. Weinstein.

Problem #3: Rising unemployment

According to some, ChatGPT and similar tools threaten many professions, including writing, design, programming, and others. However, let’s not forget that artificial intelligence can also create new jobs.

To conclude

The fact that ChatGPT can write essays and solve math problems shows once again that our educational systems are outdated. It is time for governments and specialists to design systems that are smarter and better adapted to the modern era.

Ultimately, ChatGPT is accelerating the inevitable collapse of an old system that no longer fits the direction of our society. Proponents of artificial intelligence believe we must adapt and find ways to work together with new technologies.

However, no one can deny that the unregulated and indiscriminate use of artificial intelligence poses many risks to humanity. Of course, there are many things we can do to take advantage of AI without suffering the consequences. However, we must act before it’s too late.

Moral of the story: “I fear the day technology surpasses human capabilities,” a quote often attributed to Albert Einstein.
