The Rise of ChatGPT
May 23, 2023
What if students in schools never had to write an essay again, or teachers a lesson plan or writing prompt? What if student writing, meant to develop skills and prepare students for life after high school, was not written by the student at all, but by a computer?
ChatGPT, a chatbot that can respond to nearly any prompt using artificial intelligence, was released to the public in November of 2022. Its developer, OpenAI, is a research laboratory based in the USA with nonprofit and for-profit divisions and a stated purpose of “promoting and developing a friendly AI.” The chatbot’s responses can certainly be considered “friendly,” as they imitate human language and communication norms, creating an experience similar to a normal conversation. The vast abilities of ChatGPT include responding to prompts with essays, summaries, blogs, cover letters, resumes, emails, poems, and more. This article could have been written by ChatGPT.
The program has quickly gained popularity and influence, offering partial solutions to everyday problems and inconveniences, such as the need for quick answers or writing suggestions. According to TIME, it has “built the fastest-growing user base of all time,” surpassing TikTok and Instagram.
The educational system underwent a monumental shift at ChatGPT’s release, with concerns raised about how the writing tool could be misused.
A significant concern within the educational system is academic integrity. The human-like writing of the chatbot could be submitted by students in place of their own original work with a simple copy and paste. Unlike plagiarizing from another student’s essay or an online database, submitting an AI-written essay avoids being flagged as unoriginal work at first glance. After all, the work is one-of-a-kind; it’s just not written by the student, but by a computer.
English teachers at CHS use the platform TurnItIn to assist in uncovering plagiarism. Recently, the platform has added an AI-checker feature to detect material written by ChatGPT or other AI tools. The platform released information on this update stating that the AI-checker works to promote authenticity and integrity.
TurnItIn’s blog post about the new development, in which they report to have been developing this technology for the past two and a half years, says, “Robust reporting identifies AI-written text and provides information educators need to determine their next course of action. We’ve designed our solution with educators, for educators.”
CHS teachers now have access to this tool, which supplements plagiarism checks and attempts to uphold values of integrity within the education system.
TurnItIn’s AI-checker is one of many brand-new technologies that claim to detect ChatGPT. One of the first, released on January 2, 2023, is the app GPTZero, developed by 22-year-old Princeton computer science student Edward Tian.
The hurry in which these checkers have been released to face the rapid spread of ChatGPT raises questions of reliability. TurnItIn acknowledged the possibility of false positive results when checking for AI-written material, and an article on Tian’s GPTZero by The Guardian claimed that “users cite mixed results.” Teachers face a tough choice of whom to trust: the AI-checker or a student claiming their work is original.
As AI-checkers advance, OpenAI is making advances of its own. ChatGPT not only learns to imitate human writing better with every interaction, but also collects personal information from users that OpenAI can retain and use. Also, according to Toby Walsh, Scientia professor of artificial intelligence at the University of New South Wales, users can ask the chatbot to rephrase its writing, to add more “randomness,” or to “obfuscate with different synonyms and grammatical edits” in an attempt to bypass checkers.
Another concern raised by ChatGPT’s release is that misinformation or biased points of view may be taken as factual when presented in the bot’s writing. Valid research is unbiased and avoids partisan sources of information that highlight selective aspects or present opinions instead of facts. ChatGPT is known to pull its information from “570GB of data obtained from books, webtexts, Wikipedia, articles and other pieces of writing on the internet” (BBC Science Focus). Biased reporting and misinformation are abundant in many sources, and Wikipedia is known to be an unreliable source for in-depth research because it can be edited by almost any user. When ChatGPT writes an argumentative or persuasive essay, it draws on this limited set of data for all of its information, logic, and citations. Users do not know what kinds of sources, reliable or unreliable, ChatGPT pulled its information from for a given piece of writing.
Misinformation could be spread very easily if one trusts ChatGPT too much, thinking that the bot’s abilities to “answer any question” extend to knowing the most accurate answer to every question.
ChatGPT’s capabilities also include creating art, such as drawings and songs. The numerous functions and their effects and consequences span too great an area to discuss in one article. In summary, the age of human tasks being overtaken by artificial intelligence can no longer be contemplated as a future event. It is happening on a massive scale right now.