Modes of feedback in ESL writing: Implications of shifting from text to screencast
For second language writing (SLW) instructors, decisions regarding technology-mediated feedback are particularly complex as they must also navigate student language proficiency, which may vary across different areas such as reading or listening. Yet technology-mediated feedback remains an underexplored realm in SLW especially with regard to how modes of technology affect feedback and how students interact with and understand it. With the expanding pervasiveness of video and increased access to screencasting (screen recording), SLW instructors have ever-growing access to video modes for feedback, yet little research to inform their choices. Further, with video potentially requiring substantial investment from institutions through hosting solutions, a research-informed perspective for adoption is advisable. However, few existing studies address SLW feedback given in the target language (common in ESL) or standalone (rather than supplemental) screencast feedback.
This dissertation begins to expand SLW feedback research and fill this void through three investigations of screencast (video) and text (MS Word comments) feedback in ESL writing. The first paper uses a crossover design to investigate student perceptions and use of screencast feedback over four assignments given to 12 students in an intermediate ESL writing class, drawing on a series of surveys, a group interview, and screen-recorded observations of students working with the feedback. The second paper argues for appraisal, an outgrowth of systemic functional linguistics (SFL) focused on evaluative language and interpersonal meaning, as a framework for understanding interpersonal differences in modes of feedback through an analysis of 16 text and 16 video feedback files from Paper 1. Paper 3 applies a more intricate version of the appraisal framework to the analysis of video and text feedback collected in a similar crossover design from three ESL writing instructors.
Paper 1 demonstrates the added insights offered by recording students’ screens and their spoken interactions, showing that students needed to ask for help and switched to their L1 when working with text feedback but not with video. The screencast feedback was found to be easier to understand and use, as MS Word comments were seen as difficult to connect to the text. While students found both types of feedback helpful, they championed video feedback for its efficiency, clarity, ease of use, and heightened understanding, and said they would greatly prefer it for future feedback. Successful changes were made at similar rates for both types of feedback.
The results of Paper 2 suggest possible variation between the video and text feedback in reviewer positioning and feedback purpose. Specifically, video seems to position the reviewer as holding only one of many possible perspectives with feedback focused on possibility and suggestion while the text seems to position the reviewer as authority with feedback focused on correctness. The findings suggest that appraisal can aid in the understanding of multimodal feedback and identifying differences between feedback modes.
Building on these findings, Paper 3 shows a substantial reduction in negative appreciation of the student text, both overall and for each instructor individually, in video feedback as compared to text. Text feedback showed a higher proportion of negative attitude overall and positioned the instructor as a single authority. Video feedback, on the other hand, preserved student autonomy through its balanced use of praise and criticism, offered suggestion and advice, and positioned the instructor as one of many possible opinions. Findings held true in aggregate and for each instructor individually, suggesting that interpersonal considerations varied across modes. This study offers future feedback research a way to consider the interpersonal aspects of feedback across multiple modes and situations. It provides standardization procedures for applying and quantifying appraisal analysis in feedback that allow for comparability across studies. Future work applying the framework to other modes, such as audio, and situations, such as instructor conferences, peer review, or tutoring, is encouraged. The study also posits the framework as a tool for instructor reflection and teacher training.
Taken together, the three studies deepen our understanding of the impact of technological choices in the context of feedback. Video feedback appears to be a viable replacement for text feedback, as it was found to be at least as effective for revision while being greatly preferred by students for its ease of use and understanding. With an understanding of how students use feedback in different modes, instructors can better craft feedback and training for their students. For instance, instructors should remember to pause after comments in screencast feedback to give students time to stop the recording or revise. Video was also seen to allow for greater student agency in their work and to position instructor feedback as suggestions the student could act upon. These insights can help instructors choose and employ technology in ways that best support their pedagogical purposes.