Reimagining Education | College of Education & Human Development | Oral Presentation | Student Center East - Room 203
Feb 05, 2025, 03:15 PM - 04:00 PM (America/New_York)
Session F: Reimagining Education | 3rd Annual Graduate Conference for Research, Scholarship, and Creative Activity | Contact: grad@gsu.edu
Reevaluating Generative AI in English Language Education: A Critical Examination of Pedagogical Claims and Classroom Applications in the Real World
03:15 PM - 03:30 PM (America/New_York)
In the field of teaching English as a second language (ESL), generative AI systems are increasingly positioned as a “solution” to pedagogical problems. While these systems claim to increase the efficiency of teaching, their potential to resolve the core dilemmas of ESL educators, such as facilitating meaningful peer interaction and supporting mixed-level classes, remains unclear. This study employed an environmental scan to catalog generative AI tools designed for English language teaching, together with a qualitative thematic analysis of the claims about their benefits made in selected promotional materials. These claims were then compared with existing studies of dilemmas in language teaching. Preliminary results suggest that while these systems prioritize efficiency and customizability in content generation, they often rely on generalized templates and outputs, which may fail to accommodate the linguistic diversity and unique contexts of classrooms. For instance, while certain online platforms and tools can generate teaching materials based on instructors’ needs, it is questionable whether they can resolve issues arising from students of different proficiency levels and linguistic backgrounds. Moreover, reliance on AI-generated materials risks promoting generic content and reinforcing a standardized, franchised approach to language education. Accordingly, critical AI literacy among language teachers is essential to mitigate the risks of overusing formulaic, AI-generated materials as a replacement for teachers’ expertise. Generative AI systems should also be adapted to the domain-specific challenges of language teaching. This study challenges educators to reconsider the role of AI as a partner in addressing, rather than bypassing, complex teaching challenges.
Presenters
Hyunhwa Kim
Georgia State University
Assessing Impacts of Training Mode and Rater Experience on Assessment of L2 Writing
03:30 PM - 03:45 PM (America/New_York)
This study examines the effectiveness of three rater training modalities for assessing L2 writing: 1) asynchronous video training, 2) synchronous discussion and negotiation, and 3) a hybrid approach combining both. The research explores how these methods affect inter-rater reliability, score consistency, and accuracy in assessing essays written by students in Japanese EFL secondary classes. Employing a mixed-methods approach, the study combines quantitative measures with semi-structured interviews and also examines the effect of teaching experience on raters’ assessments. Six Japanese secondary English teachers (experienced raters) and six Japanese university students (novice raters) assessed 40 writing samples from Japanese students using 3-point holistic and analytic scales across three stages: pre-training (10 essays), during training (20 essays), and post-training (10 essays). Raters were divided into three training groups corresponding to the three modalities. We used Krippendorff’s alpha to evaluate inter-rater reliability and repeated-measures ANOVA to analyze score differences and accuracy across training methods and experience levels. Our findings suggest that hybrid training produced higher inter-rater reliability and that less experienced raters achieved larger gains in scoring accuracy. Interview data revealed that both teachers and students appreciated receiving benchmark scores in advance, which facilitated more practical discussions rather than a reliance on instinct or experience alone. However, feedback highlighted limitations of the video training, particularly raters’ inability to raise specific concerns or discuss interpretations of rubric descriptors. The findings highlight the need to revise rater training programs by incorporating more personalized and interactive training methods to optimize educational assessment outcomes.
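For readers unfamiliar with the two analyses named above, the following is a minimal Python sketch of how Krippendorff's alpha and a repeated-measures ANOVA are typically computed. It is not the authors' code: the simulated data, column names, and package choices (krippendorff, statsmodels) are illustrative assumptions only.

# Minimal illustrative sketch (not the authors' code). Assumes the
# third-party packages krippendorff, statsmodels, pandas, and numpy
# are installed (e.g., pip install krippendorff statsmodels).
import numpy as np
import pandas as pd
import krippendorff
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Inter-rater reliability: raters as rows, essays as columns, scores on
# a 3-point ordinal scale (np.nan would mark essays a rater skipped).
ratings = rng.integers(1, 4, size=(4, 10)).astype(float)
alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
print(f"Krippendorff's alpha: {alpha:.3f}")

# Repeated-measures ANOVA: each rater contributes one (hypothetical)
# accuracy score per training stage (pre / during / post), in long format.
long = pd.DataFrame({
    "rater": np.repeat([f"r{i}" for i in range(4)], 3),
    "stage": ["pre", "during", "post"] * 4,
    "accuracy": rng.normal(0.7, 0.1, size=12),
})
res = AnovaRM(long, depvar="accuracy", subject="rater",
              within=["stage"]).fit()
print(res.anova_table)  # F statistic and p-value for the stage effect

In the study's actual design, the within-subject factor would be the training stage and a between-group comparison would separate the three training modalities and two experience levels; the sketch shows only the core mechanics of each measure.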
Presenters
Chiho Young-Johnson
Georgia State University
Anton Vegel
Georgia State University