ChatGPT: mixed feelings about OpenAI's bot

ChatGPT, OpenAI’s chatbot, has captivated with its capabilities as much as it has raised fears since its launch late last year. While it can write philosophical essays, produce lines of computer code, and assist medical research, it may also facilitate cybercrime, devalue the arts, or even encourage mediocrity in school.

Launched in late November by California-based startup OpenAI, ChatGPT has become a global talking point in just two months. And for good reason: this artificial intelligence has abilities so extraordinary that the French daily Le Parisien described it as a “revolution comparable to the advent of the computer.”

AI with multiple mastery

Indeed, thanks to the phenomenal amount of data gleaned from the Internet, this conversational bot can play the role of an expert on almost any subject. It can write a decent poem or newspaper article, compose a solid philosophical or literary essay, describe a painting or a song, devise a chef’s recipe, draft compelling emails, produce a complex computer program, and more. All in a matter of seconds!

ChatGPT may also assist medical research. According to a study published by researchers at Drexel University in Pennsylvania, GPT-3 (the model behind ChatGPT) can identify certain cues in a person’s writing that help detect early stages of Alzheimer’s disease, reportedly with around 80% accuracy. Remarkable!

A recipe for mediocrity?

ChatGPT has attracted so much public interest in part because it is freely available to everyone, whereas advances in artificial intelligence usually stay within scientific circles. But this openness rightly raises concerns: its use can be perverted on a large scale, as shown by the problems it already poses in education. It lets pupils write assignments effortlessly and students generate essays with a single click.

In this way, ChatGPT allows pupils and students to cheat far more effectively than with Wikipedia or Google searches. A professor in Lyon noticed suspicious similarities between several of his students’ papers; when confronted, they admitted to having used OpenAI’s artificial intelligence. The practice is also risky because the bot’s knowledge is fragmentary and not always verifiable. Moreover, the data built into the tool stops at 2021 and has not yet been updated.

Possibility of malicious use

Moreover, because ChatGPT is open to all, there is a real risk of it spreading fake news or conspiracy theories, adding yet another way to pollute the Web. The ease of use of this new artificial intelligence, combined with its capabilities, may also fuel cybercrime: the conversational agent can write malicious code itself, potentially turning anyone into a hacker. Amateur cybercriminals could, for example, use it to craft convincing phishing messages.

Ineffective security measures

Aware of these dangers, OpenAI has put several security measures in place to prevent malicious use of its AI. In particular, the startup blocks requests to write ransomware code and restricts access from certain countries, including Russia, Iran, China, and Ukraine. But cybercriminals live all over the world and can always find ways around a geographic ban, so other solutions will have to be found.

Treating the chatbot as an aid

In the meantime, we can only rely on the common sense of users. In academia, for example, students should treat ChatGPT as an aid, much like Google Translate: drawing inspiration from what the AI produces in order to do their own work. In the same spirit, Microsoft plans to integrate the technology into Word, Excel, and PowerPoint to improve the user experience, which could save time and boost efficiency.
