News

LAU’s Inaugural PodChat Session Addresses AI Detection in the Classroom

The School of Arts and Sciences kicks off its podcast series on academic challenges with a focus on the questions surrounding AI detection tools and their broader impact on student learning.

By Luther J. Kanso

From left: Ms. Dabaghi, Dr. Baroudi, Dr. Rizk, Dr. Azzi, Dr. Wex, Ms. Akiki.

With the rapid expansion of AI software, universities worldwide have begun to wrestle with the implications of AI-generated work in academia. To what extent such work is even detectable remains a contentious question.

To address growing concerns about AI detection in education, LAU’s School of Arts and Sciences, in collaboration with the School Undergraduate Academic Council (SUAC) and the Center for Innovative Learning (CIL), organized an online PodChat session on March 18 titled On AI Detection Tools.

This podcast-style session gathered faculty members to examine the effectiveness, limitations and challenges of adapting teaching methods and assessment strategies in an academic environment where AI use, whether permitted or concealed, is becoming the norm.

“Our initiative comes as part of our commitment to supporting faculty professional development,” said Lecturer and Honors Program Coordinator Reine Azzi, who hosted the podcast.

Throughout the session, faculty from departments including English and Creative Arts, Political and International Studies, Biological Sciences, Physical Sciences and Liberal Studies engaged in a thorough conversation on how AI is influencing not only writing-based practices but also the science disciplines, where research, data analysis and visualization tools are all affected.

Citing episode 555 of the Teaching in Higher Ed podcast, A Big Picture Look at AI Detection Tools, which featured Christopher Ostro, assistant teaching professor at the University of Colorado Boulder’s Division of Continuing Education, LAU faculty discussed how reliably AI detection tools such as Turnitin identify AI-generated content and distinguish human from machine-written text.

Senior Instructor Evelyn Dabaghi, for example, recounted experiences with Turnitin’s AI detection feature, which often misidentified well-written student work as AI-generated while failing to flag work that had clearly been influenced by AI.

Dabaghi added that this issue can be further compounded by the fact that English is not the first language of many LAU students, whose linguistic choices may diverge from the patterns these detection tools expect. “This raises questions about bias in AI detection when used to assess diverse student populations,” she said.

AI’s influence extends beyond the humanities, however, with faculty from the sciences emphasizing its ever-growing role in research. Associate Professor Brigitte Wex, who specializes in computational chemistry, remarked that tools for aiding literature reviews and generating scientific content are becoming more sophisticated.

“It just raises questions about authorship and intellectual responsibility,” she said. “The use of AI in scientific research sometimes involves collaboration between the researcher and the tool, which makes it more difficult to establish clear boundaries.”

While there is value in using detection tools, particularly in online courses where oversight of students’ work tends to be limited, these tools often fail to catch the refinements that well-versed students make to AI-generated content.

According to Associate Dean and Professor Sandra Rizk, given that machine-generated material is becoming more integrated into fields of all kinds, the question arises as to whether it should be treated differently at all in the context of student assignments.

To that end, the discussion touched on the practical implications of AI detection tools, particularly on how they impact students’ learning experiences in the classroom.

Instructor and Writing Center Coordinator Maya Akiki emphasized that reliance on detection tools risks shifting the focus from teaching students responsible engagement with AI to simply policing their work, and that students’ use of AI is not always intended to deceive.

“Many students turn to AI tools to help with language barriers, overcome writer’s block or refine their arguments,” she noted, suggesting that a punitive approach to AI detection may not address the issue at hand. Instead, she added, “educators must rethink how AI can be integrated into the learning process in a way that supports student development.”

The faculty agreed that banning AI is neither practical nor desirable. Instead, they advised a balanced approach that emphasizes critical thinking, ethical use and personal accountability.

Many of the faculty found success in asking students to document their writing and research process, noting when and how AI was used. This transparency, noted Assistant Dean and Professor Sami Baroudi, shifts the focus from detection to education and encourages students to engage with AI as a tool rather than a shortcut.

Another pragmatic approach would be to develop institution-specific policies that account for the unique needs of LAU’s student body, suggested Dr. Azzi. There was a consensus that rather than relying solely on third-party AI detection software, LAU could explore the possibility of developing its own detection tools tailored to the linguistic and academic context of its students.

For Dabaghi, however, the most reliable means of handling this dilemma would be to draw on the relationship between teacher and student, which is first and foremost a human experience that cannot be replicated by AI.

“I know my students, and I observe them first-hand, so I’m more in tune with their thinking and analytical processes as well as their attitude to learning,” she said. “This connection we as instructors establish with our students trumps any technological advancement in detecting or monitoring their work.”

The session concluded that AI detection is not merely a technical matter but a broader issue that intersects with pedagogy and ethics, one that requires deliberate consideration and careful decision-making.