Credit: Starryai

The first chatbot was a digital therapist called ELIZA, a computer program that matched the words a user typed against a list of scripted responses. Its creator, Joseph Weizenbaum, said his aim was to caricature human conversation, not recreate it. When people interacted with ELIZA, Weizenbaum was shocked by how openly they shared their thoughts with the program. Though he was skeptical of a machine’s capacity to mimic human speech, some experts at the time believed chatbots would be indistinguishable from humans within just a couple of years. That was in 1966.

Nearly 60 years later, that prediction may be coming true. Last November, OpenAI released ChatGPT, the most sophisticated chatbot in history. The program responds to user prompts, generating answers based on patterns learned from vast amounts of text gathered from the web. By January, ChatGPT had gained over 100 million users, making it the fastest-growing consumer software in history.

The surprise-hit software created a buzz in several industries. Writers, coders and communicators worried an AI language model could replace them in the workforce. Health care providers warned that the software could give people inaccurate medical information. A personal injury lawyer was caught using ChatGPT to write a legal motion that cited several nonexistent cases. Perhaps the greatest concern has been in education, where students could outsource the skill of writing to software.

These AI-generated images are the result of a human prompt to show the influences of technology in the classroom. Notice the extra legs in the top image and the backward monitors in the bottom image? Credit: Courtesy of Fotor AI Image Generator

A survey from January found nearly one-third of college students have used an AI chatbot to complete a written assignment, and more than 15% say they used one for more than half of their writing assignments. With such a massive shift in such a short time, schools haven’t had time to keep up.

Neither Bend-La Pine Schools nor the Redmond School District has a specific policy for AI use in schools. Officials at Oregon State University-Cascades, following the guidance of the larger OSU system, said instructors should set clear expectations for when AI learning tools are allowed. Central Oregon Community College formed a discussion group that will start meeting during the fall semester.

“I don’t think anyone can actually answer how profound the impact will be,” said Justin Jory, a writing professor at COCC who will co-lead the discussion group. “But there is a sense that it is a game changer, there is a growing number of educators who feel that this is something that you basically can’t ignore.”

The Hallmarks of AI

ChatGPT’s ability to opine on just about any subject is impressive, but its prose has been described as formulaic and soulless. Professors can note a change in a student’s typical voice or give work a smell test to judge whether the writing is original. Still, with the endless prompts one can feed a program, and with the different language models behind different platforms, it can be difficult for educators to tell.

“You can get a sense of the voice of AI, and they’re different. Google’s Bard is different from ChatGPT. And they also make mistakes, sometimes not obvious ones,” said Ariel Mendez, a Bend city councilor and political science instructor at OSU-Cascades. “I kind of love that as a professor, because it gives me some reassurance that we can still outsmart AI, but I don’t know if that’s going to last forever.”

OpenAI, the company behind ChatGPT, has acknowledged the program’s occasional mistruths. It found that earlier models answered truthfully on 58% of questions, and that false answers often echo popular misconceptions. In other instances, the program may write a generic response rather than a factual one. But the models are improving, according to both the company and personal experience. In February I asked ChatGPT to write a biography of me. It falsely claimed I was raised in a small town, that I am “one of the most respected and credible journalists of [my] generation” and that I’m a “true inspiration to journalists everywhere.” When I asked again in July, it brought me back down to earth.

Credit: Courtesy of Fotor AI Image Generator

“As of my last update in September 2021, I do not have information on a prominent journalist named ‘Jack Harvel,'” ChatGPT cruelly and accurately wrote. “It is possible that Jack Harvel is a journalist who gained recognition after my last update, or he may be a journalist working in a specific region or niche that hasn’t received widespread attention.”

Some programs claim they can detect AI writing by measuring characteristics of a text, chiefly how predictable it is. These tools aren’t guaranteed to catch AI-generated work and can flag human-written samples as false positives.

“My colleagues produced something original, they submitted it to an AI detection tool, and it came back as written by an AI. That tells me that the AI detection tools are not great at the moment. They’re not, in my opinion, trustworthy. I wouldn’t hinge my student assessment or my assessment of student work on an AI detection tool at this moment,” Jory said.
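The “predictability” idea behind these detectors can be illustrated with a toy model. The sketch below uses a simple bigram word model with add-one smoothing, not the proprietary classifiers commercial detectors actually run; the corpus and phrases are invented for illustration. Text that reuses word pairs the model has seen before scores as more predictable, which is the signal detectors look for.

```python
# Toy predictability scorer: rate how "expected" each word pair in a text is.
from collections import Counter
import math

def train_bigrams(corpus: str):
    """Count unigrams and bigrams in a whitespace-tokenized corpus."""
    words = corpus.lower().split()
    return Counter(words), Counter(zip(words, words[1:]))

def predictability(text: str, unigrams, bigrams) -> float:
    """Average log-probability per word pair, with add-one smoothing.
    Values closer to zero mean the text is more predictable."""
    vocab = len(unigrams)
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    score = 0.0
    for w1, w2 in pairs:
        p = (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)
        score += math.log(p)
    return score / max(len(pairs), 1)

corpus = "the cat sat on the mat . the cat sat on the rug ."
uni, bi = train_bigrams(corpus)
familiar = predictability("the cat sat on the mat", uni, bi)
surprising = predictability("mat the on cat sat rug", uni, bi)
# The phrase built from common word pairs scores as more predictable
print(familiar > surprising)  # → True
```

The same limitation Jory describes shows up even in this toy: a human who happens to write in very common phrasings will also score as highly predictable, which is why detectors produce false positives.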

CopeGPT

With no foolproof way to detect AI-generated content, educators are now tasked with reworking assignments to be less amenable to the technology. Alix Gitelman, the vice provost for academic affairs at OSU, said faculty are already rewriting assignments around questions whose answers can’t be scraped from the web, such as, “write two paragraphs about last week’s in-class discussion.” Educators are also tasked with being more upfront about the appropriate uses of AI.

“Our faculty are really wanting to encourage everybody to be more proactive about talking about AI upfront, like, have a conversation with students about what its benefits are and what some problem areas are,” Gitelman said.

There’s also AI as a tool for education rather than a way to plagiarize. While it might be unethical to copy and paste a chatbot’s answer to a written question, the line gets blurrier when a student isn’t being evaluated on the task the AI performed. Gitelman said OSU found one student’s use of AI to generate code for various climate simulation models to be ethical, because the student’s thesis was about the impact on the climate, not the creation of a simulator. It likely saved the student weeks of work.

“I think it raises the question for educators of, what are the really important things that we want students to learn? And, those things are going to remain: critical thinking, how to communicate. You can’t have a bot or an AI do that for you,” Gitelman said. “It’s got a lot of people trying to sort out both the ethics of it and the utility of it.”

Credit: Courtesy of Fotor AI Image Generator

Technological shifts often cause a stir in education. For years people opposed allowing calculators in schools, arguing that they diminish students’ math skills. Others countered that calculators deepen students’ understanding of mathematical concepts while having little impact on arithmetic skills. The advent of the internet made it easier to plagiarize, and also easier to check whether work was stolen from a third party.

“Technologies are going to be here and we’re going to continue to use them. You can’t really push back against them, oftentimes,” Jory said. “I think the uses for these and the ways that we’re allowed to use will be different across different disciplines. And that’ll be dependent on those disciplines’ relationships to technologies.”

Not all of Jory’s colleagues agree on the potential utility of AI writing tools. Almost half of the K-12 educators surveyed in an EdWeek poll thought AI would have a negative impact on education compared to 27% who thought it was a positive change. Jory, though, says there are more questions than answers when it comes to AI in schools.

“I don’t feel like I could really put policy or practice into place at the moment. I just have too many questions, like, ‘What is this thing? How does it work? How can I use it as an individual?’” he said.

While those questions remain unanswered, Jory doesn’t advise students on how to use AI in a classroom setting. Others are more bullish, relying on the detection tools, flawed as they are, and their own intuition to determine whether a student’s writing is authentic.

“I’m approaching the classroom differently,” Mendez said. “Essentially, if there is an opportunity to cheat or plagiarize using AI, I let students know that there are detection tools, but that ultimately, I exercise my own judgment, and take a look at the consequences from that perspective.”

Educators will need to keep following AI language models and searching for the best ways to regulate them, because the technology isn’t going anywhere. Research and Markets, a firm that studies economic trends, projects the generative AI market will grow five-fold over the next five years, reaching an estimated $51 billion by 2028.





Jack is originally from Kansas City, Missouri and has been making his way west since graduating from the University of Missouri, working a year and a half in Northeast Colorado before moving to Bend in...

Join the Conversation


  1. Why is there even a discussion? Our college district vets each submission through plagiarism checkers; most submissions generated by AI are flagged as plagiarism since their content is scraped indiscriminately from existing sources. It’s an auto fail.

    AI should only be used once a student has a full grasp of the basic concepts of both the English language and the subject for which a paper is being submitted. AT BEST, it should be a starting point for the student to begin composing their own, original works and the sources vetted by the student.

  2. A student who uses AI for a writing assignment is saying “I’m not smart enough to write a good paper, or I don’t have the time to do the research.”

    Expressing thoughts in words is a basic skill that all students should develop. Many students have learned awesome technical skills outside of the classroom. But if they communicate primarily in abbreviations and emojis on their phones, they lose sight of how a sentence works. Perhaps they should spend some time in the company of books.
