Abstract
Although academics are increasingly expected to undertake studies of their practice, particularly where this involves the use of learning technology, experience to date suggests that meeting this expectation is difficult. This paper attempts to explain why. After reviewing literature that provides a rationale for practitioner evaluation, the paper describes the experiences of three projects (EFFECTS, ASTER and SoURCE) that attempted to draw on this process. Three main areas of difficulty are discussed: the skills of the academics involved, their motivations, and the kinds of evidence (and analysis) that 'count' for a given evaluation. This discussion leads to the identification of a number of problems that inhibit practitioner evaluation, including ambiguity in the nature and purpose of evaluation and a general feeling that the function of evaluation has already been served through existing quality mechanisms. Finally, the paper considers the possible implications of some or all of the steps in the evaluation process being undertaken by an evaluator working alongside the academic.
Original language | English |
---|---|
Pages (from-to) | 3-10 |
Number of pages | 7 |
Journal | Journal of Educational Technology & Society |
Volume | 5 |
Issue number | 3 |
Publication status | Published - 2002 |
Keywords
- evaluation
- learning technology
- academic roles
- practitioner evaluation