Academic Integrity at WCU: Key Insights from a Fall 2025 Student Survey

Written by Gabriel Claros and Clemmy Brophy,
CFC Student Assistants

Academic integrity plays a central role in shaping the learning environment and the long-term value of a degree from Western Carolina University. To better understand how students perceive academic integrity and where additional support may be needed, 58 students participated in an anonymous survey during the 14th Annual Recalibrate Your Compass event in Fall 2025.  

Their responses reveal a campus community that overwhelmingly values honesty yet still faces challenges in navigating expectations and academic pressure. This data suggests that while students deeply value academic honesty, they still benefit from clear expectations and faculty support to consistently uphold it. 

The survey results provide valuable insight into how students experience academic integrity on a practical, day-to-day level. One of the strongest findings is that students (N=58) feel comfortable communicating with faculty about academic integrity. Almost all respondents (96%, n=56) reported that they feel very or somewhat comfortable asking professors questions about what constitutes cheating or plagiarism. This indicates that many faculty members are already fostering an approachable and supportive atmosphere. Only a small number of students (n=2) expressed discomfort, further suggesting that communicating clear and consistent expectations across courses would benefit the entire student body. 

CFC Student Assistants, Gabriel and Clemmy, sitting behind a table during an event.

Gabriel (pictured on the left) and Clemmy (pictured on the right) are student assistants at the CFC; both are in their first semester at Western. Gabriel is majoring in Mechanical Engineering and Clemmy is majoring in Marketing.  

Students also expressed a strong sense of understanding regarding academic integrity policies. A remarkable 98% (n=57) agreed that they have a good understanding of what constitutes a violation of academic integrity at WCU. Similarly, over three-quarters of students (79%, n=46) feel very confident about their professors’ expectations around generative artificial intelligence (GenAI) in coursework. This highlights the benefits of having ongoing conversations in the classroom, ensuring that expectations are transparent and aligned with departmental or course objectives. 

Perhaps the most striking takeaway is how deeply students value integrity itself. Every participant stated that acting with academic integrity is important to them. When asked how integrity affects the value of their degree, students used words like value, earned, legitimate, and ownership to describe why honest work matters. Their responses reflect a strong internal motivation to do work that they can be proud of, reinforcing the idea that academic integrity is not merely a policy, but a personal commitment shared across campus. 

Students also reported positive observations of integrity in their peers. 74% (n=43) have “observed a peer at WCU practice academic integrity,” and 86% (n=50) believe that most WCU students act with integrity most of the time. These perceptions matter because they shape the culture of the institution: when students believe their peers are committed to honest work, they are more likely to hold themselves to the same standard (Tatum & Schwartz, 2017). 

Despite this overwhelmingly positive outlook, the survey also revealed challenges that commonly interfere with students’ ability to uphold academic integrity. Students were asked what biggest challenges stand in the way of maintaining academic integrity; for this question, they could select multiple answers. 

Challenges students face in maintaining academic integrity

Bar chart showing challenges students face in maintaining academic integrity (multiple answers).

The challenge students selected most often was time pressure (n=43). Heavy workloads, overlapping deadlines, and personal responsibilities can lead students to feel rushed or overwhelmed, increasing the temptation to cut corners. Lack of confidence in their own work (n=25), unclear expectations (n=19), and the temptation to seek unauthorized help (n=16) were the three other challenges students selected in the survey. These responses suggest that violations of academic integrity are often less about intent and more about time crunches, stress, uncertainty, or feeling unprepared.

The survey also exposed a few knowledge gaps. While most students demonstrated a solid understanding of academic integrity, only 60% (n=35) correctly identified fabrication as the act of creating or falsifying information. The remaining students confused this term with plagiarism (n=16), self-plagiarism (n=5), or facilitation (n=2), highlighting the need for continued education on the distinctions between these concepts. 

Overall, the survey results suggest a campus that is committed to integrity but still navigating the pressures and complexities of modern academic life. Faculty can play a key role by clarifying expectations, especially around ethical GenAI use and proper source citation, and by recognizing the impact of external factors on student decision-making. Meanwhile, students can continue to contribute to a culture of honesty by asking questions, seeking support when needed, and reflecting on the long-term value of earning their degree with integrity.

Reference 

Tatum, H., & Schwartz, B. M. (2017). Honor codes: Evidence based strategies for improving academic integrity. Theory Into Practice, 56(2), 129–135. https://doi.org/10.1080/00405841.2017.1308175 

We Recommend: Talk is cheap: why structural assessment changes are needed for a time of GenAI

Rethinking Assignment Design in the Age of GenAI

January 2026

Recommended by April Tallant, Director  

Generative AI is reshaping higher education, and our assessment practices must evolve to keep pace. Many institutions have introduced frameworks like traffic light systems, AI use scales, and mandatory declarations. These are helpful first steps because they give us language and structure while we find our bearings. But as Corbin, Dawson, and Liu (2025) argue in Talk is Cheap: Why Structural Assessment Changes Are Needed for a Time of GenAI, these approaches are limited; they rely on student compliance with unenforceable rules. The authors call these approaches discursive changes, or modifications that work through instructions without altering the tasks. Discursive changes alter the communication about the assignment, not the assessment itself. A simple example of discursive change is adding ‘GenAI use is not permitted in this assessment’ to existing assessment instructions. 

The authors argue that discursive changes to assessments are well-intentioned but flawed because they assume students understand ambiguous rules and will comply even when non-compliance is advantageous. Discursive changes also work on the assumption that compliance can be verified, but current AI-detection tools are limited. The authors state, “current detection tools are fraught with false positives and negatives, creating uncertainty and mistrust rather than clarity and accountability” (p. 1092). 

By contrast, Corbin, Dawson, and Liu argue that structural changes “create assessment environments where the desired behavior emerges naturally from the assessment design” (p. 1093). In other words, structural changes modify the tasks, not the instructions. One example of a structural change the authors provide is adding a “checkpoint in live assessment requiring tutor signoff on lab work.” Structural changes focus on the process, not the outcome. For instance, rather than submitting a final essay, students might participate in live discussions about how their ideas and thinking developed in response to feedback. Another example is designing assessments that connect throughout the term: students build on their earlier work, demonstrating their learning across touchpoints, not from one task alone. 

The authors conclude that long-term solutions require rethinking assessment design so that validity is built into the structure, not just explained in instructions. The challenge of assessment design will continue as GenAI advances. Our time as educators is better spent on structural redesign of assessments to ensure validity and genuinely demonstrate student capabilities. 

Action item 1:

Have you modified your assessments with a structural approach? We’d love to hear from you! Join us for the AI Forum on Tuesday, Jan 27, 3:30 – 5:00 pm either in person or on Zoom to share your experience. 

Action item 2: 

After reading the article, consider the following questions:  

    • The article suggests shifting from product-focused to process-focused assessment. What “authenticated checkpoints” could you realistically build into your lessons or modules to capture a student’s developmental process? 
    • Think of an assessment you believe works well. What about the task itself encourages the kind of learning you want?
    • What are barriers to making structural changes to assignments? How can we overcome them?
    • Do you want to chat about this article? Send me an email (atallant@wcu.edu) and I’ll stop by your office or meet you on Zoom. 

Action item 3:

Consider registering for CFC’s assignment re-design workshop in February! 

Corbin, T., Dawson, P., & Liu, D. (2025). Talk is cheap: Why structural assessment changes are needed for a time of GenAI. Assessment and Evaluation in Higher Education, 50(7), 1087–1097. https://doi.org/10.1080/02602938.2025.2503964 

A group of people gather around a table covered with papers, collaborating on a project.

Photo credit: Canva Pro; Monkey Business Images.

Things instructors should know about the WaLC’s AI policy

If you teach at Western, you might be wondering how peer educators at the WaLC (Writing and Learning Commons) navigate conversations around AI with students seeking support. The WaLC team would like instructors to know that an internal AI policy exists and that it was developed to be as faculty-forward as possible; while the policy allows for some flexibility, it is conservative in its application.  

By default, peer educators ask students who come to the WaLC what their instructor has written in their syllabus (or assignment instructions) concerning the course’s AI policy. If there is no policy to be found, peer educators won’t encourage students to utilize AI in any way. If there is a statement that allows for AI use, peer educators can assist “in areas provided on the syllabus, or if not stated, in areas such as brainstorming, gathering information, interpreting feedback, outlining, and quizzing” (see AI Decision Tree, developed by the WaLC). 

Additionally, the WaLC team has open conversations with peer educators who feel uncomfortable using AI themselves (see Step 1 of the AI decision tree). If a peer educator encounters a student whose professor requires AI use, and that peer educator is neither familiar nor comfortable with AI, they can refer the student to a peer educator who is.  

Decision tree for determining whether peer educators should use AI.

One recommendation both the WaLC and the CFC have is for instructors to talk with students about what counts as acceptable AI use, in addition to including a course-specific AI policy in the syllabus. For example, some students are still unsure whether they can use certain software (Grammarly is a common example) in all of their coursework, only for some activities, or not at all.  

The WaLC’s goal is to empower students to achieve their academic goals. To that end, the WaLC’s peer educators have come up with some creative ways to use AI to help students. In one case, a recreational therapy major struggled with using person-first language when discussing a patient’s disability. With the help of AI, they curated a vocabulary list that helped the student expand their repertoire and become a more confident writer. In another case, a student couldn’t make sense of their instructor’s track changes and suggestions on a first draft, since no additional comments were provided. They asked AI why the paper was marked up the way it was and concluded that much of their writing had been repetitive.

Sometimes, students do come into the WaLC with AI-generated writing. Peer educators are trained to approach those conversations much like conversations about suspected plagiarism. These conversations begin with open-ended, non-judgmental questions, such as “Do you know how to cite?” or “Did you forget to add attribution here?”, to determine whether the plagiarism they noticed occurred unintentionally due to a lack of knowledge. Likewise, when a peer educator suspects AI use, they ask probing questions to find out if and how the student used AI. Peer educators always advise students to adhere to the instructor’s policy, and they remind students that if a peer educator can notice when AI was used, their professors will notice too.  

If you have questions about the WaLC’s AI policy, feel free to reach out to Haylee Melton at wilkieh@wcu.edu; if you would like support on how to approach your own AI policy, reach out to Coulter Faculty Commons at cfc@wcu.edu.

NEW: AI Ethics Course Module

The CFC is launching a module instructors can import into their Canvas course. The goal of this module is to help students learn the importance of using AI ethically in their college studies. It is a self-contained module that is intended to be customized by each instructor as desired to fit their teaching needs. Instructions are included in non-published pages of the module.  

If you are interested in piloting the module, import the “AI Ethics Course Module” from Canvas Commons. For step-by-step instructions, review Importing a Resource from Canvas Commons. 

The CFC would like to extend its thanks to Haylee Melton,
Associate Director of the Writing and Learning Commons, for collaborating on this article.