
It all comes down to trust: Rethinking the professional responsibilities of educators in the age of AI
Educators have professional responsibilities towards their students, including to foster confidence in the education they are receiving and to prepare them for active participation in society. However, more than three years after the public release of ChatGPT and other LLMs, educators are still struggling to replace the time-tested, evidence-based tools they once used to cultivate and evaluate learning. In fact, despite extensive trial and error, research, and institutional supports, it remains almost impossible for educators to know for certain whether they are evaluating their own students, or Claude. Students and educators are disengaging, and the public is losing trust in the power of education to transform lives, communities, and society. In such times of uncertainty and flux, the author proposes that one strategy for educators to uphold their professional responsibilities is to take a step back and invest in trust.

By Meaghan J. Girard, C. Tr.
The public release of ChatGPT in November 2022 was a shock to education systems the world over. Overnight, students were given free, unfettered access to technology whose very affordances serve to undermine thinking[1], and educators were tasked with reviewing their courses accordingly, with minimal guidance or resources. These are trying times in higher education—in particular for those who form the beating heart of the institutions: educators and students.
In the shadows, both students and educators are silently accumulating negative experiences in their relationships with one another and with the education system. Students have had to navigate disparate policies governing AI use and have been accused of AI use even when none occurred, or when it was used in good faith, resulting in an uneven education experience. As for educators, they have been revising their courses and evaluation mechanisms in substantive, ongoing ways in an effort to prevent (or catch) “cheating” and ensure that students continue to learn, only to end up grading increasingly homogeneous assignments with flawless grammar. Stress, exhaustion, and a sense of disillusionment run high.
The greatest casualty of this slice of history is not the questioning of the reliability and trustworthiness of evaluation outcomes; if anything, ChatGPT triggered an important discussion regarding the form, function, and benefits of evaluation. Rather, the more detrimental consequence is the loss of trust in the classroom.
How trust reframes the professional responsibilities of educators
Mutual trust between educators and students is a major driver of student achievement, across socioeconomic contexts. Indeed, despite the prevalent metaphor of educators pouring their knowledge into eager minds, learning is a profoundly social and interactive endeavour, one that requires educators to socialize students and for students to engage in genuine risk-taking to become active participants in learning.[2]
To cultivate and maintain that trust, in turn, requires mutual accountability. It requires openness, so that people make themselves vulnerable to a certain extent (e.g., when students take risks in an assignment) and share control (e.g., when educators allow students to lead discussions). And it requires reliability with respect to the rules of the game (e.g., clear rules and less reliance on “gotchas”). When trust erodes, students shift from engagement to self-protection and disengagement.[3]
Prioritizing trust over quantitative evaluation outcomes not only helps reframe the professional responsibilities of educators, but it also injects a bit more breathing space into their work. So what can it look like to create classrooms that promote trust, above all else?
Reliability
Educators are responsible for creating clear guidelines for students to navigate in their learning, and they can do this by adopting a clear AI policy. And yet, even the most iron-clad AI policy in existence cannot change a truth universally acknowledged: students stuck between a rock and a hard place will cheat, especially when they have the ultimate cheating tool at their fingertips. Rather than inserting little “gotchas” into assignments, educators can emphasize low-stakes assignments whenever possible. Even better if these assignments do not require grading, so that educators do not feel “resentful” about evaluating Claude. Low-stakes assignments also facilitate risk-taking, another critical component of learning.
A word on AI detectors: these are notoriously unreliable, with a high rate of false positives for second-language speakers. Anecdotal evidence abounds of students being accused of “not being smart enough” to produce a text in which they invested themselves, and of students avoiding certain words and, famously and regrettably, em-dashes, in order to protect themselves. Few tools can undermine the necessary conditions for trust as quickly as an AI detector.
Accountability
Educators can extend that relationship of accountability beyond the teacher-student dyad. When educators emphasize the learning community and make students accountable to each other, students experience a certain degree of healthy social pressure to “show up” for others and to justify their work. So those low-stakes assignments that the educator did not “grade”? The students can deepen their thought processes and identify strengths and weaknesses together.
Openness
It is never comfortable to stand in front of a classroom and admit to not knowing something, yet AI remains, in many respects, a black box. Admitting “I don’t know if it’s worthwhile using ChatGPT for this task. Let’s find out together” not only reduces the pressure on educators, but also engages students in a shared project of learning. What’s more, it can turn the classroom into a space where students and educators share the ways they have been using AI (or not), and discuss together whether these uses respect academic integrity or undermine learning.
It’s also important to be open to the different teaching and evaluation approaches adopted by your colleagues (e.g., from banning to embracing AI use in general translation courses). People learn in different ways; as such, students benefit when they are exposed to a diversity of learning and evaluation styles.
Trust is good pedagogy
Prioritizing trust does not imply throwing evaluations out the window. Rather, similarly to the principles of backward design in course planning, one can decide to prioritize trust, and then use this principle to determine evaluation mechanisms. When trust is the guiding principle, it helps educators conceive of learning evaluations in ways that protect it.
It also works on student motivation[4]: when AI is integrated with reliability, openness, and honesty, students become more motivated to “play the game” and to become active, accountable participants in their learning, and they gain the confidence to do so.
And finally, adopting trust as a guiding principle brings the joy back to teaching… and learning.
Meaghan J. Girard is an organizational ethnographer and ethicist whose research focuses on how AI and hype change the way we work, learn, and organize in professional and academic settings. She worked as a professional translator for more than ten years, sits on OTTIAQ’s board of directors, and has recently joined Concordia University’s Department of French Studies as faculty lecturer.
[1] Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., … & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv preprint arXiv:2506.08872, 4.
[2] Tschannen-Moran, M. (2017). Trust in education. In Oxford research encyclopedia of education.
[3] Ibid.
[4]Viau, R. (2014). Savoir motiver les étudiants. Se former à la pédagogie de l’enseignement supérieur, 235-254.



