Abstract
The study examines how vocabulary is assessed in oral proficiency examinations. Vocabulary is increasingly used as an indicator of candidates' oral proficiency in large-scale tests, but there is so far limited empirical evidence about how raters assess it. In this experiment, 25 participants rated one English oral text produced by a candidate with Chinese as a first language. The raters' verbal protocols were transcribed and coded to identify what they attended to when assessing vocabulary. The candidate's use of 'advanced' words was found to have a direct impact on vocabulary scores, and both vocabulary and non-vocabulary features emerged in the raters' protocols. The findings call into question the possibility of assessing vocabulary as a discrete construct.
| Original language | English |
|---|---|
| Pages (from-to) | 1-13 |
| Number of pages | 13 |
| Journal | System |
| Volume | 46 |
| Early online date | 23 Jul 2014 |
| DOIs | |
| Publication status | Published - 31 Oct 2014 |
Keywords
- rater protocols
- vocabulary assessment
- analytic rating scales
- oral proficiency