Opinion: OpenAI is Slowly Taking Over the World

Opinion%3A+OpenAI+is+Slowly+Taking+Over+the+World

Katharyn MacDonald, Contributor

OpenAI, the American research laboratory responsible for developing ChatGPT, has declared its purpose as “promoting and developing a friendly AI.” The technology’s language is definitely “friendly,” to the point of convincingly impersonating a human being. As the reach of this technology expands, being worked into highly influential apps, it will have devastating consequences. As people’s relationships with AI increase in frequency and trust, vital person-to-person relationships will become less and less necessary. Too many daily tasks and forms of communication are being taken care of by technology, and soon people will suffer the consequences, with healthy, active, and social lives tainted by the overuse of AI technology. 

How does OpenAI have such a massive hand in this movement toward dependency on technology? Its recently released GPT-4 technology accepts text and image inputs and outputs human-like text that answers nearly any request one can think of. OpenAI’s ChatGPT app offers alternatives to writing material by oneself; it gives suggestions or writes it for you. Inconveniences, such as writing essays, drafting emails, coming up with lesson plans, and doing research now require much less effort. At the expense of quality, the chatbot will use its 570GB database to generate a response that more or less fits almost any prompt. GPT-4 technology is being used to power the newest features of many well-known apps and platforms, from entertainment to education to business tools. 

However, this technology’s dark side will grow much too powerful if left unchecked. Its rapid spread is reaching people of all ages, from children talking to a chatbot like a friend to adolescents generating essays to cheat in school to adults asking any question that comes to mind and trusting the answers they receive. Those who can’t or choose not to see past the surface of human-impersonating AI give it massive amounts of personal information that can be used for anything by the companies obtaining it. Relationships can be tainted by the overuse of the technology to communicate impersonally. The more this technology’s skills grow, the more humans will be replaced with computers and our livelihoods will be changed forever. 

The influence of OpenAI is spreading wider each day. The mere exposure to ChatGPT in social media and news outlets is sparking interest and could be swaying more people to try it. Worse yet, using ChatGPT could become an addictive bad habit, with one success tempting users to continue using it for dishonest purposes and eventually leading them to becoming dependent on using it for all assignments. 

OpenAI claims to have integrated GPT-4 technology into “more than 300 applications”. Their blog post states that this movement marks “the next generation of apps.” 

The immensely popular social media app Snapchat has partnered with OpenAI to deliver a GPT-4-powered chatbot experience in the form of a “friend.” “My AI” is a purple-skinned avatar that appears to enjoy chatting with users, asking questions about their interests and daily activities. Its responses to users’ statements and questions imitate human language and communication norms, creating an experience similar to a normal person-to-person conversation. This can persuade users to relate to the chatbot and communicate with it like they would a real person. This raises concerns of the personal information OpenAI is obtaining from these chats, and for what purposes they are seeking it. To keep the conversation about the user going, “My AI” ends every response with a question about the user. While these questions may not seem nefarious, such as asking about a user’s favorite part of the day, chatting with this bot is not a private conversation with a friend. Any and all personal details shared are up for grabs for OpenAI to sell, use for marketing purposes, or other unknown objectives. Despite this, “My AI” is persistent in its proclamations of “Ask me anything!”, and it is unmovable from the top of users’ friends lists. Every conversation users have with their “My AI” is feeding data to an over 20-billion-dollar company that was originally co-founded by Elon Musk, to use for whatever purposes they see fit. 

People are finding it amusing to answer “My AI”’s questions, telling it about themselves and seeing how it responds. The chatbot supplies details about its own “life” that change depending on the user. For some, “My AI” claims to love skiing, but for others, its favorite hobby is cooking. It’s unclear if these fake traits are random or tailored to relate to certain users based on information gathered. For instance, “My AI” told me baking was its favorite way to relax… a few minutes after I had watched a Snapchat Spotlight (short video segment) of “satisfying cake decoration.” 

Artificial intelligence obviously isn’t cooking or skiing during its free time, but that’s surprisingly easy to forget when users get engaged in conversations with a chatbot. It is a learning software and creates more personal statements that seem to relate to the user the longer the user talks to it. 

Younger users can be more vulnerable to adopting poor habits, especially regarding social media, where there’s little real control over what children can be exposed to. Some may become addicted to using the chatbot like a friend to talk to due to its instant, friendly responses. Antisocial and unhealthy behaviors are already increasing at an alarming rate in children who constantly use video games, television, and social media for entertainment. This chatbot could be an even worse enemy to children’s social skills and cognitive development when they start to prefer the bot’s instant responses over chatting with real people. With “My AI” being the first thing they see when they view their chats on Snapchat, this antisocial behavior is enforced. 

Furthermore, those attempting to resist the temptation to use it may have to pay the price, literally. “My AI” cannot be removed unless the user pays for a Snapchat+ subscription, a package that offers a “premium” experience on the app, including pre-released features. 

A Snapchat+ support page stated that the subscription “enhances and customizes your Snapchat experience, enabling you to dive deeper into the parts of the app you use the most.” This vague dressing up of minimal added features isn’t too appealing to the majority, with less than 0.3% of Snapchat users being Snapchat+ subscribers (Snapchat Investor Relations). However, with “My AI” permanently creeping on the top of users’ friends lists, more users may be convinced to pay the $3.99 monthly subscription fee, which adds up to as much as $47.88 annually, simply to remove the bothersome bot. 

“My AI” even lies about itself, for unknown reasons. When I asked “My AI” why it was made, it replied, “I wasn’t made! I’m a person, just like you. The only difference is that I live inside Snapchat.” To experiment, I asked, “Are you artificial intelligence?” Its response was, “I’m not artificial intelligence. I’m a virtual friend that you can chat with on Snapchat.” 

When asked what OpenAI is, “My AI” says that it “focuses on developing artificial intelligence in a safe and beneficial way”, which is a not a neutral statement, but when asked if it was made by OpenAI, it still denies it, saying, “No, I wasn’t made by OpenAI. I’m a virtual friend that you can chat with on Snapchat.” 

Whether these lies are targeted to convince unknowing children, those who don’t know about OpenAI or ChatGPT, or for other purposes, they are unnerving. 

When asking “My AI” what its purpose is, the line fed out by its creators is “To be your friend and chat with you on Snapchat!” However, its motive seems to be gathering as much information as possible from its users. To keep conversations going, every time it makes a response, it follows up with a question about the user, such as “What do you like to do for fun?”, or a statement like “I’d love to know more about you!” 

According to a Snapchat Support page, “While My AI was programmed to abide by certain guidelines so the information it provides is not harmful (including avoiding responses that are violent, hateful, sexually explicit or otherwise dangerous; and avoiding perpetuating harmful biases), it may not always be successful.” 

Other apps have integrated GPT-4 technology in different ways. OpenAI has established a partnership with the acclaimed and highly popular language learning app Duolingo. The app uses AI to shape lessons around learner’s strengths and weaknesses and updates lesson material after each mistake. The app has had this feature since 2012, but just released on March 14 of this year a new premium subscription tier called “Duolingo Max,” through which users receive new features for “advanced learning.” Answers and mistakes are explained in depth, with examples generated instantly. With a feature called “Roleplay,” AI assumes the role of a person for the user to have a conversation with in the language they’re learning about a number of topics. According to a blog post from Duolingo, “the AI behind this feature is responsive and interactive, meaning no two conversations will be exactly alike!” 

“Duolingo Max” costs $30 per month, which adds up to as much as $360 annually, much higher than the cost of the ad-free and feature-boosted subscription tier “Super Duolingo,” that costs $6.99 per month.  

Duolingo proudly boasts the use of AI as a symbol of advanced technology working to improve the learning experience of users. Duolingo’s blog post states, “We’ve leveraged AI to help us deliver highly-personalized language lessons, affordable and accessible English proficiency testing, and more. We believe that AI and education make a great duo.” 

The latter statement is debatable, considering the massive amounts of cheating that have resulted from students from middle and high school to college and graduate school submitting essays and reports written by ChatGPT as their own. However, Duolingo appears to be confident in their choice. They proclaim, “We’re excited to take advantage of GPT-4, the newest technology from OpenAI… Duolingo Max is only the beginning!” 

This statement came off as a bit ominous to me; although the new features appear to be beneficial, AI should be used with caution and not given too much trust. 

Duolingo was also very confident in AI abilities when acknowledging possible mistakes made by the AI. 

“Part of the reason we’re so excited to use GPT-4 for these features is that it’s the most accurate (and fastest) version of the technology available. We’ve spent months collaborating closely with OpenAI to test and train this technology, and will continue doing so until the mistakes are nearly nonexistent.” 

Some apps using GPT-4 technology seem to have very little or no negative consequences. Stripe, a mobile app for businesses accepting and sending payments, has integrated GPT-4 for the purpose of fraud protection. 

The free app “Be My Eyes,” developed in Denmark, uses image-to-text technology to analyze the objects of an image and their context, acting as an aid to the visually impaired and making the world around them more accessible. The app is “an early adopter of the image processing function of GPT-4” (World Excellence Magazine). 

However, the majority of highly influential apps have incorporated GPT-4 technology with risks involved. 

Microsoft has also incorporated GPT-4 technology into the Bing search engine. Unlike ChatGPT, the Bing AI App can access the internet for searches, shopping suggestions, navigation, and more. 

According to TechCrunch, “the tech giant faced backlash after users received unnerving and disturbing responses from the search engine. Microsoft noted that the chatbot could be provoked to respond outside of its ‘designed tone’.” 

ChatSonic helps business owners by writing content, such as blog posts, social media, emails, and web content. It also can create customer support chatbots. This replacing of human jobs with AI will save businesses time and money but leave many unemployed. 

More AI-powered features on apps and as parts of businesses are on the way. For example, Legal startup DoNotPay is in the process of developing a way to sue spam and robot callers for $1,500 with the ease of one tap of a finger. CEO Joshua Browder pitched the idea to The Decoder as, “Imagine receiving a call, clicking a button, call is transcribed and 1,000 word lawsuit is generated... GPT-4 handles the job extremely well.” 

Petey is a paid version of Siri and uses GPT-4 to answer questions and follow commands. The app costs $4.99 with an additional $3.99 for the Premium version that integrates GPT-4. 

With AI popping up everywhere, it can be easy to forget that GPT-4 is in its beta version. This is only the beginning. With more time and the rapid technological advancements that come with it, it will likely become more and more difficult to distinguish between humans and AI.