Recognizing Possible Use of Generative AI 

Establishing clear expectations for generative artificial intelligence (GAI) in your course is the first step in supporting student learning. Once you have established how students may or may not use GAI in your course, consider some of the ways you can ensure these expectations are being met. This resource can help you interpret student work and identify patterns that may indicate GAI use. 

It can be helpful to look for patterns in student work rather than relying on one specific signal. The indicators below are grouped by strength and should be considered in relation to the course context, the assignment, and the student’s prior work. 

Strong Indicators of Possible Use 

These features, if present, can strongly suggest that a submission was generated or heavily assisted by GAI. 

Explicit AI-Related Language 

Some submissions contain phrasing that reflects how GAI tools “talk” when responding to prompts. This may include language that positions the writer as an assistant rather than a student, such as: 

  • Statements that distance the writer from the work (e.g., text advising the writer to conduct the experiment or write the paper, as though the author were someone else) 
  • References to being an AI or lacking human experience or emotion 

Sources and Citation Issues 

GAI tools are prone to fabricating or misusing sources. Concerning patterns may include: 

  • Citations to research, books, or authors that do not exist 
  • Use of real sources that are unrelated to the course, assignment, or discipline 

Strong Similarity to GAI Output 

A submission may stand out if it closely mirrors GAI-generated text while differing noticeably from peers’ work. This can include: 

  • Unusual formatting, phrasing, or organizational choices 
  • A combination of errors, sources, or stylistic features that align with GAI output rather than course expectations or norms 

Moderate Indicators of Possible Use 

These features may raise concern but are generally not sufficient on their own to determine whether GAI was used.  

Factual Inaccuracies

Because GAI tools generate text based on patterns rather than understanding, they may present incorrect or invented information confidently. 

Generic or Vague Content 

GAI-produced writing often lacks specificity. This can show up as: 

  • Broad, surface-level discussion 
  • Minimal engagement with course readings, lectures, or disciplinary concepts 
  • Language that sounds polished but says very little 

Failure to Address the Prompt 

Some submissions may appear well written yet do not meaningfully respond to the assignment question or task, suggesting a mismatch between the prompt and the generated output. 

Uncharacteristic Language Patterns 

Some possible indicators include: 

  • Repeated phrases or structures 
  • Inconsistent tone or word choice 
  • A level of vocabulary, grammar, or organization that is notably different from the student’s prior work  

When considering language-related indicators, instructors should take care not to conflate GAI use with linguistic differences or legitimate use of grammar support tools. 

Other Considerations  

Whenever possible, concerns about GAI misuse should be grounded in comparison with the student’s own prior work. 

When concerns about unauthorized GAI use arise, instructors can invite the student to discuss their decision-making and their writing process in completing the assignment. Depending on the course context, it may also be appropriate to request drafts, notes, or reflective process statements if these elements were not built into the assessment design. 

Generative Artificial Intelligence (GAI) Content Detectors 

Flags from approved detection tools may be considered as part of a broader review, but they should not be treated as definitive evidence on their own. Memorial’s statement on GAI provides guidance on using GAI detection tools as evidence in academic misconduct allegations: their use is not recommended, and their results cannot be the sole basis for an allegation. While some tools may be more accurate than others, no detector’s results should be considered definitive. 

Challenges with GAI Detectors 

  • False Positives: GAI detection tools can incorrectly flag text as AI-generated. This is especially common for writers who use English as an additional language or employ unconventional styles. Overreliance on these tools can unfairly penalize students. 
  • Copyright: Some detection platforms store submitted text for future analysis, which may inadvertently expose students’ original work or cited sources. Instructors should consider intellectual property and copyright implications before uploading student submissions. 

Resource created by: Melanie D., Gil S. & Ruth H.

Originally Published: January 28, 2026