Designing, implementing, and evaluating an automated writing evaluation tool for improving EFL graduate students’ abstract writing: a case in Taiwan
Writing English research article (RA) abstracts is a difficult but mandatory task for Taiwanese engineering graduate students (Feng, 2013). In response to their current situation and needs, this dissertation aimed to develop and evaluate an automated writing evaluation (AWE) tool to assist their RA abstract writing in English, following a Design-Based Research (DBR) approach as the methodological framework. DBR was chosen because it strives to solve real-world problems through multiple iterations of development, building on the results of each iteration to advance the project.
Six design iterations were undertaken to develop and evaluate the AWE tool in this dissertation: (1) corpus compilation of engineering RAs, (2) genre analysis of engineering abstracts, (3) machine learning of move classification in abstracts, (4) analysis of lexical bundles used to express moves, (5) analysis of the verb categories associated with moves, and finally, (6) AWE tool development based on the previous findings, classroom implementation, and evaluation of the AWE tool following Chapelle’s (2001) computer-assisted language learning (CALL) framework.
To begin with, I collected a corpus of 480 engineering RAs (Corpus-480) to extract appropriate linguistic properties as pedagogical materials to be implemented in the AWE tool. A sub-corpus (Corpus-72) was compiled from 72 RAs randomly chosen from Corpus-480 for manual and automated analyses. Next, to seek the best descriptive framework for the structure of engineering RA abstracts, two move schemata were compared: (1) IMRD (Introduction, Methodology, Results, and Discussion) and (2) CARS (Create-A-Research-Space; Swales, 1990). Abstracts in Corpus-72 were annotated, and the two schemata were evaluated according to three quantitative metrics devised specifically for this comparison.
Applying a statistical natural language processing (StatNLP) approach, a Support Vector Machine (SVM) was trained for automated move classification in abstracts, using formulaic language in engineering RA sections as linguistic features. Next, four-word lexical bundles and verb categories were identified from Corpus-480 and Corpus-72, respectively: lexical bundles associated with moves in abstracts were extracted automatically, and verb categories (i.e., tense, aspect, and voice) in the moves of abstracts were identified using CyWrite::Analyzer, a hybrid (statistical and rule-based) NLP software.
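At its core, extracting four-word lexical bundles amounts to counting contiguous four-word sequences across a corpus and keeping those that recur above a frequency threshold. The sketch below is a minimal illustration of that idea, not the dissertation’s actual pipeline; the function name, toy sentences, and thresholds are hypothetical, and the real analysis would also apply dispersion and move-association criteria.

```python
from collections import Counter

def extract_bundles(sentences, n=4, min_freq=2):
    """Count n-word lexical bundles (contiguous n-grams) over tokenized
    sentences and keep those meeting a minimum frequency threshold."""
    counts = Counter()
    for sent in sentences:
        tokens = sent.lower().split()  # naive whitespace tokenization
        for i in range(len(tokens) - n + 1):
            counts[" ".join(tokens[i:i + n])] += 1
    return {bundle: c for bundle, c in counts.items() if c >= min_freq}

# Toy sentences for illustration only (not from Corpus-480)
corpus = [
    "the results of this study show improvement",
    "the results of this study indicate a gap",
    "we discuss the results of this study",
]
bundles = extract_bundles(corpus, n=4, min_freq=2)
# "the results of this" and "results of this study" recur and survive the cut
```

A corpus-linguistic analysis would typically add a dispersion check (the bundle must appear in several different texts) before treating a recurrent n-gram as a genuine lexical bundle.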
Finally, the AWE tool was developed based on the findings from the previous iterations and implemented in an English-as-a-foreign-language (EFL) classroom setting. Through analyzing students’ drafts before and after using the tool, along with their responses to a questionnaire and a semi-structured interview, the AWE tool was evaluated against Chapelle’s (2001) CALL evaluation framework. The findings showed that students attempted to improve their abstracts by adding, deleting, or resequencing sentences, lexical bundles, and verb categories. Their attitudes toward the effectiveness and appropriateness of the tool were quite positive. Overall, the AWE tool drew students’ attention to the use of lexical bundles and verb categories to achieve the communicative purposes of each move in their abstracts.
In conclusion, this dissertation started from Taiwanese engineering students’ need to improve their English abstract writing and developed and evaluated an AWE tool to assist them. Following DBR, the findings are discussed to inform the next generation of AWE tools. With these iterations in place, future studies can focus on developing pedagogical materials from genre-based analysis in other disciplines to meet learners’ needs.