A Series on Generative AI: Is Academic Integrity a Thing of the Past?

The capabilities of, and concerns around, artificial intelligence (AI) tools and applications such as Generative AI are rapidly growing. Some well-known authorities in this space are concerned that the speed with which tools are being developed and deployed is dangerous. Generative AI is a publicly available online AI chatbot with super (processing) powers. It was created by OpenAI, which released its first public AI tool, known as GPT-2, in 2019 in the hopes of gathering data and feedback to improve the tool's accuracy and use.

Fast-forward to Nov. 30, 2022, when OpenAI released GPT-3.5. It became an overnight sensation with over 100 million monthly active users reported by January 2023, according to a UBS study. Since then, OpenAI has released a paid version, GPT-4, which can understand visual inputs in addition to text, something the prior versions were unable to do, and is promoted as providing “safer and more useful responses.”  

Now, while the algorithms behind the chatbot are complicated, its use is simple, and its possibilities are seemingly endless. Generative AI has passed the Bar Exam, the U.S. Medical Licensing Exam, the SAT, the GRE, AP exams, MBA finals, and three of the Sommelier exams, among others, according to OpenAI. After creating a free log-in (for version GPT-3.5), users can access these same capabilities. You simply submit a question, request, or statement to the chatbot, and Generative AI will generate a response that can be challenging to distinguish from human-written text.

Since its release, Generative AI has become widely popular around the world. But like any new technology, it has been met with sharp criticism and somewhat understandable fear. According to a recent Forbes article, more than five major U.S. banks, as well as the Los Angeles, Seattle, and New York City public school districts, have restricted employees’ and students’ access to the AI tool. More recently, the United Kingdom announced plans to regulate the chatbot, and Italy became the first Western country to ban Generative AI altogether.

What Has Regulators, Ethicists, Researchers, and Educators Around the World So Concerned? 

From an academic, specifically higher education, perspective, there appears to be no shortage of potential concerns regarding academic and research integrity. Generative AI use cases are endless: You can write entire college admissions essays, outline and cite academic papers, generate project ideas, develop class syllabi, answer test questions and more. The possibilities for violating college and university academic integrity policies and students’ codes of conduct are plentiful. 

A poll of Stanford University students conducted by The Stanford Daily found that “17% of Stanford student respondents reported using Generative AI to assist with their fall quarter assignments and exams.” In addition, one online course provider, Study.com, asked 1,000 students over the age of 18 whether they used Generative AI; 89% of respondents reported having used the chatbot to complete homework. Furthermore, in a recent BestColleges survey of 1,000 current undergraduate and graduate students, 43% reported having used Generative AI. But regardless of the poll referenced or the campus surveyed, it is abundantly clear that the use of Generative AI has quickly become prevalent within (and outside of) academia.

Is Academic Integrity a Thing of the Past? 

Academic integrity is broadly described as demonstrating honest and ethical behavior in scholarly work and academic activities. Academic integrity is intended to support and protect students’ opportunities for learning. With the introduction of Generative AI and its many uses in the academic setting, the concerns are far greater than violating school policy. While simply submitting Generative AI’s results and labeling them as one’s own could be deemed unethical, academics and non-academics alike are also concerned about the potential impact on a student’s overall educational experience. 

Will Generative AI undermine the college admissions process, helping students “write” the perfect essay? Will students (and the future workforce) be less adept at critical thinking after relying so heavily on Generative AI for all the right answers?

Alternatively, no person or chatbot is perfect, right? Not all of Generative AI's responses and answers are necessarily correct. Could this result in the spread of biased information, given that GPT-4’s responses are based only on the information with which the AI was “trained”? Some information could simply be inaccurate, drawn from online sources that were not properly vetted or verified. Students who blindly trust Generative AI will not necessarily score well on all papers and exams, as the information Generative AI presents requires independent review and validation. 

But not all students and professors fear the potential negative impacts of the use of chatbots. In fact, according to the BestColleges survey, 20% of students do not view the use of the tool as cheating or plagiarism. Additionally, the survey found that 48% of students agreed that “it is possible to use AI in an ethical way to help complete my assignments and exams,” more than twice the percentage (21%) who disagreed.

And some schools and professors agree—as long as the appropriate policies and guardrails are in place. While some schools are phasing out take-home tests and open-book assignments and reverting to in-person learning and exams, sans computers or technology, others are embracing the change. In fact, one University of Pennsylvania Wharton School of Business professor is requiring students to use the tool. This professor views the use of AI as an emerging skill that needs to be learned and perfected and emphasizes that all students are responsible for verifying Generative AI results. A Northern Michigan University professor is using the tool to transform essay writing in his courses by requiring students to write their first draft in the classroom but allowing for revisions and updates with the help of AI, as long as they can be sufficiently supported and explained. 

But, as with most rapid changes in technology, our regulations, systems, and processes need some time to catch up, and there will likely be no one-size-fits-all approach to managing the risks of AI in higher education. Until then, we will continue to monitor the situation to better understand the impacts and responses at colleges and universities. 

How Does Generative AI Compare?

After authoring our own article, we asked Generative AI to draft an article using the same prompt, and the results were remarkable. We encourage you to do the same by copying and pasting the following prompt into Generative AI: Please write an article on the potential impact of Generative AI in higher education as it relates to academic integrity. Take a look and let us know how Generative AI's article compares.