Towards effective practitioner evaluation: an exploration of issues relating to skills, motivation and evidence

Jen Harvey, Martin Oliver, Janice Smith

    Research output: Contribution to journal › Article

    6 Citations (Scopus)

    Abstract

    Although academics are increasingly expected to undertake studies of their practice, particularly where this involves the use of learning technology, experience to date suggests that meeting this expectation has proved difficult. This paper attempts to explain this difficulty. After reviewing literature that provides a rationale for practitioner evaluation, the experiences of three projects (EFFECTS, ASTER and SoURCE) which attempted to draw on this process are described. Three main areas of difficulty are discussed: the skills and motivations of the academics involved, and the kinds of evidence (and its analysis) that 'count' for a given evaluation. This discussion leads to the identification of a number of problems that inhibit practitioner evaluation, including ambiguity in the nature and purpose of evaluation, and a general feeling that the function of evaluation has already been served through existing quality mechanisms. Finally, the possible implications are considered of some or all of the steps in the evaluation process being undertaken by an evaluator working alongside the academic.
    Language: English
    Pages: 3-10
    Number of pages: 7
    Journal: Journal of Educational Technology Society
    Volume: 5
    Issue number: 3
    Publication status: Published - 2002

    Keywords

    • evaluation
    • learning technology
    • academic roles
    • participant evaluation

    Cite this

    @article{4203cd63115e43b3a70ceac358343fde,
    title = "Towards effective practitioner evaluation: an exploration of issues relating to skills, motivation and evidence",
    keywords = "evaluation, learning technology, academic roles, participant evaluation",
    author = "Jen Harvey and Martin Oliver and Janice Smith",
    year = "2002",
    language = "English",
    volume = "5",
    pages = "3--10",
    journal = "Journal of Educational Technology Society",
    issn = "1176-3647",
    number = "3",

    }


    URL: http://www.ifets.info/journals/5_3/harvey.pdf