Essential Considerations for Addressing the Possibility of AI-Driven Cheating, Part 2


Please refer to Part 1 for the six essential considerations for addressing AI-driven cheating. Part 2 discusses how you can redesign assignments using the TRUST model to serve as a pedagogical tool.

Redesigning assignments can reduce the potential for cheating with AI. Students are more likely to cheat when:

  • there is a stronger focus on scores (grades) than learning (Anderman, 2015);
  • there is increased stress, pressure, and anxiety (Piercey, 2020);
  • there is a lack of focus on academic integrity, trust, and relationship building (Lederman, 2020);
  • the material is not perceived to be relevant or valuable to students (Simmons, 2018); and
  • instruction is perceived to be poor (Piercey, 2020).

You can redesign assignments to address many of these issues. I came up with the TRUST model to serve as a pedagogical tool for redesigning assignments so that students will be less likely to turn to AI for cheating. It stands for:

  • Transparency
  • Real World Applications
  • Universal Design for Learning
  • Social Knowledge Construction
  • Trial and Error

Transparency refers to making the purpose and requirements of the assignment as clear as possible. Students have always questioned the value of the content, assignments, and activities in their courses (“Why do I have to learn this?!”). Now, students are also wondering why they have to do assignments that AI chatbots could do for them in just a few seconds, like writing an essay or research paper.

Students are rarely told why they have to do papers, projects, essays, discussion forum posts, or other assignments other than “to get a good grade.” While grades provide extrinsic motivation for some students, not all students are driven by the sole purpose of getting a good grade. Students want to know why they are being asked to do what you have assigned them to do. If you make this clear to them, you might find that they will find more value in the assignment and be less motivated to cheat with AI. Additionally, if you clearly outline the steps needed to complete the assignment, students might feel more confident that they can complete the assignment and be less likely to resort to cheating. 

For my assignments, I outline the purpose at the top of the assignment document (see Figure 1) and then use the checklist feature in Google Docs to provide step-by-step directions (see the User Experience Research Project document as an example). I ask students to make their own copy of the document so they can check off items as they complete them. This helps with executive functioning and improves motivation. To learn about making assignments more transparent, explore the Transparency in Learning and Teaching (TILT) Framework.

Figure 1: Screenshot of the top of my User Experience Research Project document

Real World Applications is about making your assignment as applicable to the real world as possible. There are several ways to do this – you could ask students to participate in a civic engagement project, design an open educational resource, build a working prototype of an invention, partake in a service learning activity, create a social media campaign, teach or tutor younger students, or address one of the United Nations Sustainable Development Goals. For example, in an Ancient History class, students could design social media videos to excite interest in the topics they are studying (see “Teens Are Going Viral With Theatrical History Lessons on TikTok”). Or, in an Italian Studies class, students could create an open access eBook that teaches younger students about the Italian language and culture (see “Empowering College Students to be OER Creators and Curators”). Assignments with real world applications can help students see that the material is relevant and valuable to their own lives and to others, and potentially reduce the likelihood of turning to AI for cheating.

Universal Design for Learning (UDL) is a framework focused on reducing barriers and increasing access to learning (CAST, 2018). The framework has three main principles: Multiple Means of Engagement, Multiple Means of Action and Expression, and Multiple Means of Representation. Using UDL as a framework for redesigning your lessons can improve student interest, engagement, and motivation for learning, which, in turn, can reduce students’ inclination to turn to AI for cheating on an assignment. To learn more about this framework, read UDL: A Powerful Framework and explore the UDL on Campus website.

Social Knowledge Construction is about giving students the opportunity to deepen their understanding of the class content through interactions with others. I often tell my students that nearly all learning experiences have a social component, whether that means reading text written by others, watching videos or presentations designed by others, communicating with others, or even observing others. Yet, many college assignments lack the opportunity for students to construct knowledge with others. This does not mean that you have to (or even should!) assign group projects; there are many ways to redesign an assignment to include social knowledge construction. My favorite approach is to have students invite others to participate in the assignment. For instance, in the User Experience Research Project mentioned above, students have to find 3-5 peers to conduct usability testing of an educational digital tool, and they present these data in their final report. Another way to bring in social knowledge construction is to encourage students to get feedback on their assignment from, or to share what they learned with, individuals outside the class (see Figure 2). Encouraging learning through social knowledge construction can increase the relevance and value of an assignment and, ideally, reduce instances of cheating.

Figure 2: Screenshot of the Social Engagement section in the User Experience Research Project Final Reflection document

Trial and Error is about giving students the opportunity to learn through failure. Students can often learn more from productive failure than from success (Sinha & Kapur, 2021). Typically, though, when students fail, they don’t get a chance to learn from their mistakes, such as by redoing an assignment or retaking a quiz. When failure is a normal part of learning, rather than the final outcome, students might feel less pressure, stress, and anxiety when doing assignments because they know they will have a chance to fix any errors, and therefore they may be less likely to turn to AI to cheat. In my classes, if students fail part or all of an assignment, I give them feedback on how to improve their grade and then additional time to revise and resubmit their work. While this might not be feasible in a large class, there are other ways to incorporate trial and error at scale, like low-stakes quizzes that can be taken multiple times to demonstrate mastery of learning rather than high-stakes, one-shot midterms and final exams.

In summary, redesigning assignments to be transparent in purpose, value, and requirements; to feature real world applications of knowledge; to align with the Universal Design for Learning principles; to encourage social knowledge construction; and to allow for learning through trial and error may address many of the issues that cause students to turn to AI for cheating.

While the launch of ChatGPT spurred panic and increased fears about student cheating, there are steps that should be taken, and others that should be avoided, when addressing the potential for student cheating with AI. This two-part article presented six key points to consider when navigating the role of AI in aiding student cheating: 1) the potential impact of banning AI chatbots on the digital divide, 2) the risk of creating inaccessible and discriminatory learning experiences by banning technology for exams, 3) the limitations of AI text detectors, 4) the importance of redesigning academic integrity statements to address AI use, 5) the need to provide opportunities for students to learn with and about AI, and 6) the ways to redesign assignments to reduce the temptation to cheat with AI.

Torrey Trust, PhD, is an associate professor of learning technology in the Department of Teacher Education and Curriculum Studies in the College of Education at the University of Massachusetts Amherst. Her work centers on the critical examination of the relationship between teaching, learning, and technology; and how technology can enhance teacher and student learning. In 2018, Dr. Trust was selected as one of the recipients for the ISTE Making IT Happen Award, which “honors outstanding educators and leaders who demonstrate extraordinary commitment, leadership, courage and persistence in improving digital learning opportunities for students.”


Anderman, E. (2015, May 20). Students cheat for good grades. Why not make the classroom about learning and not testing? The Conversation.

Brewster, J., Arvanitis, L., & Sadeghi, M. (2023, January). The next great misinformation superspreader: How ChatGPT could spread toxic misinformation at unprecedented scale. NewsGuard.

Canales, A. (2023, April 17). ChatGPT is here to stay. Testing & curriculum must adapt for students to succeed. The 74 Million.

CAST (2018). Universal Design for Learning Guidelines version 2.2.

Currier, J. (2022, December). The NFX generative tech market map. NFX.

Gegg-Harrison, W. (2023, Feb. 27). Against the use of GPTZero and other LLM-detection tools on student writing. Medium.

GPTZero. (n.d.).

Ingram, D. (2023, Jan. 14). A mental health tech company ran an AI experiment on real users. Nothing’s stopping apps from conducting more. NBC News.

Kirchner, J.H., Ahmad, L., Aaronson, S., & Leike, J. (2023, Jan. 31). New AI classifier for indicating AI-written text. OpenAI.

Lederman, D. (2020, July 21). Best way to stop cheating in online courses? Teach better. Inside Higher Ed.

Lucariello, K. (2023, July 12). Time for class 2023 report shows number one faculty concern: Preventing student cheating via AI. Campus Technology.

Mollick, E., & Mollick, L. (2023). Assigning AI: Seven approaches for students, with prompts. ArXiv.

Nguyen, T., Cao, L., Nguyen, P., Tran, V., & Nguyen P. (2023). Capabilities, benefits, and role of ChatGPT in chemistry teaching and learning in Vietnamese high schools. EdArXiv.

Nolan, B. (2023, Jan. 30). Here are the schools and colleges that have banned the use of ChatGPT over plagiarism and misinformation fears. Business Insider.

Piercey, J. (2020, July 9). Does remote instruction make cheating easier? UC San Diego Today.

Sapling AI Content Detector. (n.d.).

Simmons, A. (2018, April 27). Why students cheat – and what to do about it. Edutopia.

Tate, T. P., Doroudi, S., Ritchie, D., Xu, Y., & Warschauer, M. (2023, January 10). Educational research and AI-generated writing: Confronting the coming tsunami. EdArXiv.

Sinha, T., & Kapur, M. (2021). When problem solving followed by instruction works: Evidence for productive failure. Review of Educational Research, 91(5), 761-798. 

Trust, T., Whalen, J., & Mouza, C. (2023). ChatGPT: Challenges, opportunities, and implications for teacher education. Contemporary Issues in Technology and Teacher Education, 23(1), 1-23.

University of Massachusetts Amherst. (2023). Required syllabi statements for courses submitted for approval.

Weiser, B. & Schweber, N. (2023, June 8). The ChatGPT lawyer explains himself. The New York Times.
