Exploring learner perceptions of and interaction behaviors using the Research Writing Tutor for research article Introduction section draft analysis
The rapidly growing popularity of automated writing evaluation (AWE) software in recent years has prompted considerable study of its potential for effective pedagogical use (Chen & Cheng, 2008; Cotos, 2011; Warschauer & Ware, 2006). Research on the effectiveness of AWE tools has concentrated primarily on learners' achieved output (Warschauer & Ware, 2006) and on the attainment of linguistic goals (Escudier et al., 2011); however, in-process investigations of users' interactions with and perceptions of AWE tools remain sparse (Shute, 2008; Ware, 2011). This dissertation employed a mixed-methods approach to investigate how 11 graduate student language learners interacted with and perceived the Research Writing Tutor (RWT), a web-based AWE tool that provides discourse-oriented, discipline-specific feedback on users' drafts of empirical research paper sections. A variety of data were collected and analyzed to capture a multidimensional depiction of learners' first-time interactions with the RWT; the data comprised learners' pre-task demographic survey responses, screen recordings of students' interactions with the RWT, individual users' interactional reports archived in the RWT database, instructor and researcher observations of students' in-class RWT interactions, stimulated recall transcripts, and post-task survey responses. Descriptive statistics of the Likert-scale response data were calculated, and open-ended survey responses and stimulated recall transcripts were analyzed using open coding discourse analysis techniques or Systemic Functional Linguistic (SFL) appreciation resource analysis (Martin & Rose, 2003), prior to triangulating data for certain research questions. Results showed that participants found the RWT useful and held positive attitudes toward its future helpfulness, provided that issues of feedback accuracy were addressed.
However, the participants also reported wavering trust in the RWT and its automated feedback, which seemed to originate from their observations of inaccuracies in that feedback. Systematized observations of learners' actual and reported RWT interaction behaviors revealed both unique and patterned behaviors and strategies for using the RWT for draft revision. The participants cited learner variables, such as technological background and comfort with computers, personality, status as a non-native speaker of English, discipline of study, and preferences for certain forms of feedback, as shaping their experience with the RWT. Findings from this research may illuminate potential pedagogical uses of AWE programs in the university writing classroom, as well as inform the design of AWE tasks and tools that facilitate individualized learning experiences for enhanced writing development.