Designing, implementing, and evaluating an automated writing evaluation tool for improving EFL graduate students’ abstract writing: a case in Taiwan

dc.contributor.advisor Carol A. Chapelle
dc.contributor.advisor Evgeny Chukharev-Hudilainen
dc.contributor.author Feng, Hui-Hsien
dc.contributor.department English
dc.date 2018-08-11T15:04:33.000
dc.date.accessioned 2020-06-30T02:59:26Z
dc.date.available 2020-06-30T02:59:26Z
dc.date.copyright Thu Jan 01 00:00:00 UTC 2015
dc.date.embargo 2001-01-01
dc.date.issued 2015-01-01
dc.description.abstract <p>Writing English research article (RA) abstracts is a difficult but mandatory task for Taiwanese engineering graduate students (Feng, 2013). To address these students’ situation and needs, this dissertation aimed to develop and evaluate an automated writing evaluation (AWE) tool to assist their RA abstract writing in English, following a Design-Based Research (DBR) approach as the methodological framework. DBR was chosen because it strives to solve real-world problems through multiple iterations of development, building on the results of each iteration to advance the project.</p> <p>Six design iterations were undertaken to develop and evaluate the AWE tool: (1) corpus compilation of engineering RAs, (2) genre analysis of engineering abstracts, (3) machine learning of move classification in abstracts, (4) analysis of lexical bundles used to express moves, (5) analysis of the verb categories associated with moves, and finally, (6) development of the AWE tool based on the preceding findings, its classroom implementation, and its evaluation following Chapelle’s (2001) computer-assisted language learning (CALL) framework.</p> <p>To begin with, I collected a corpus of 480 engineering RAs (Corpus-480) from which to extract linguistic properties suitable as pedagogical materials for the AWE tool. A sub-corpus (Corpus-72) was compiled from 72 RAs randomly chosen from Corpus-480 for manual and automated analyses. Next, to find the best descriptive framework for the structure of engineering RA abstracts, two move schemata were compared: (1) IMRD (Introduction, Methodology, Results, and Discussion) and (2) CARS (Create-A-Research-Space; Swales, 1990).
Abstracts in Corpus-72 were annotated, and the two schemata were evaluated according to three quantitative metrics devised specifically for this comparison.</p> <p>Applying a statistical natural language processing (StatNLP) approach, a Support Vector Machine (SVM) was trained for automated move classification in abstracts, using formulaic language from engineering RA sections as linguistic features. Additionally, four-word lexical bundles and verb categories were identified from Corpus-480 and Corpus-72, respectively: four-word lexical bundles associated with moves in abstracts were extracted automatically, and verb categories (i.e., tense, aspect, and voice) in the moves of abstracts were identified using CyWrite::Analyzer, a hybrid (statistical and rule-based) NLP software.</p> <p>Finally, the AWE tool was developed on the basis of the findings from the previous iterations and implemented in an English-as-a-foreign-language (EFL) classroom setting. By analyzing students’ drafts before and after they used the tool, together with their responses to a questionnaire and a semi-structured interview, the AWE tool was evaluated against Chapelle’s (2001) CALL evaluation framework. The findings showed that students attempted to improve their abstracts by adding, deleting, or resequencing sentences, lexical bundles, and verb categories. Their attitudes toward the effectiveness and appropriateness of the tool were quite positive. Overall, the AWE tool drew students’ attention to the use of lexical bundles and verb categories to achieve the communicative purposes of each move in their abstracts.</p> <p>In conclusion, this dissertation started from Taiwanese engineering students’ need to improve their English abstract writing and developed and evaluated an AWE tool to assist them.
Following DBR, the findings are discussed with a view to improving the next generation of AWE tools. With these iterations in place, future studies can focus on developing pedagogical materials from genre-based analyses in different disciplines to meet learners’ needs.</p>
dc.format.mimetype application/pdf
dc.identifier archive/lib.dr.iastate.edu/etd/14824/
dc.identifier.articleid 5831
dc.identifier.contextkey 8330867
dc.identifier.doi https://doi.org/10.31274/etd-180810-4410
dc.identifier.s3bucket isulib-bepress-aws-west
dc.identifier.submissionpath etd/14824
dc.identifier.uri https://dr.lib.iastate.edu/handle/20.500.12876/29009
dc.language.iso en
dc.source.bitstream archive/lib.dr.iastate.edu/etd/14824/Feng_iastate_0097E_15444.pdf|||Fri Jan 14 20:27:16 UTC 2022
dc.subject.disciplines Bilingual, Multilingual, and Multicultural Education
dc.subject.disciplines English Language and Literature
dc.subject.keywords Applied Linguistics and Technology
dc.subject.keywords Abstract writing
dc.subject.keywords Automated Writing Evaluation
dc.subject.keywords Design-Based Research
dc.subject.keywords English for Academic Purposes
dc.subject.keywords Genre Analysis
dc.subject.keywords Natural Language Processing
dc.title Designing, implementing, and evaluating an automated writing evaluation tool for improving EFL graduate students’ abstract writing: a case in Taiwan
dc.type article
dc.type.genre dissertation
dspace.entity.type Publication
relation.isOrgUnitOfPublication a7f2ac65-89b1-4c12-b0c2-b9bb01dd641b
thesis.degree.level dissertation
thesis.degree.name Doctor of Philosophy
File
Original bundle
Name: Feng_iastate_0097E_15444.pdf
Size: 2.23 MB
Format: Adobe Portable Document Format