The spatial scale information that mediates face identification, gender and expression

P. G. Schyns, Lizann Bonnar, F. Gosselin

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

It is now well established that face information is represented at multiple spatial scales. However, research on face recognition has so far lacked a technique that identifies the specific information that humans locally represent at different scales, for different face categorization tasks. To address this issue, we used the Bubbles technique of Gosselin and Schyns (in press) in three different categorization tasks (identity, gender, and expressive or not) performed on 20 face stimuli. To compute the experimental stimuli, we decomposed the original faces into six bands of spatial frequencies of one octave each, at 2.81, 5.62, 11.25, 22.5, 45, and 90 cycles per face, from coarse to fine. Information at each spatial frequency bandwidth was partially revealed by a number of randomly located Gaussian bubbles forming a mask (the standard deviations of the bubbles were 2.15, 1.08, 0.54, 0.27, and 0.13 deg of visual angle, from coarse to fine scales, so that the number of cycles revealed per bubble was normalized to 3). To generate an experimental stimulus, we simply added the information revealed at each scale; the number of bubbles per image was automatically adjusted to reveal just enough face information to maintain a 75% correct categorization criterion. Three independent groups of 20 subjects and three ideal observers each resolved a different categorization of the same faces (identification, gender, and expressive or not). For each spatial scale, Bubbles isolated the different information that human and ideal observers used to resolve these categorizations. Using this information, we synthesized the effective stimuli of each task, for humans and ideal observers, for comparison purposes.
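
The abstract describes the stimulus-generation procedure in enough detail to sketch it in code. The fragment below is a minimal illustration, not the authors' implementation: it assumes a square grayscale face image held in a NumPy array, ideal (hard-edged) band-pass filters, pixel-unit bubble sigmas standing in for the degrees-of-visual-angle values quoted above, and that the coarsest band is shown in full (the abstract lists six bands but five bubble sizes). Function names such as bandpass, bubble_mask, and bubbles_stimulus are hypothetical.

import numpy as np
from numpy.fft import fft2, ifft2


def bandpass(img, low, high):
    # Keep spatial frequencies in [low, high) cycles per image (ideal annular filter).
    n = img.shape[0]
    fy, fx = np.meshgrid(np.fft.fftfreq(n) * n, np.fft.fftfreq(n) * n, indexing="ij")
    radius = np.sqrt(fx ** 2 + fy ** 2)
    keep = (radius >= low) & (radius < high)
    return np.real(ifft2(fft2(img) * keep))


def bubble_mask(shape, n_bubbles, sigma_px, rng):
    # Sum of randomly centred Gaussian "bubbles", clipped to [0, 1].
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    mask = np.zeros(shape)
    for cy, cx in rng.integers(0, shape[0], size=(n_bubbles, 2)):
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma_px ** 2))
    return np.clip(mask, 0.0, 1.0)


def bubbles_stimulus(face, n_bubbles_per_scale, rng):
    # One-octave bands centred at the frequencies quoted in the abstract
    # (cycles per face), from coarse to fine.
    centres = [2.81, 5.62, 11.25, 22.5, 45.0, 90.0]
    n = face.shape[0]
    # Assumed pixel sigmas: scaled so a bubble spans roughly 3 cycles of its band,
    # in place of the degrees-of-visual-angle values given in the abstract.
    sigmas_px = [1.5 * n / c for c in centres[1:]]
    # Assumption: the coarsest band is shown in full (one more band than sigmas).
    stim = bandpass(face, centres[0] / np.sqrt(2), centres[0] * np.sqrt(2))
    for centre, sigma, k in zip(centres[1:], sigmas_px, n_bubbles_per_scale):
        band = bandpass(face, centre / np.sqrt(2), centre * np.sqrt(2))
        stim += band * bubble_mask(face.shape, k, sigma, rng)
    return stim


rng = np.random.default_rng(0)
face = np.random.rand(256, 256)  # placeholder for a grayscale face image
stim = bubbles_stimulus(face, n_bubbles_per_scale=[4, 4, 4, 4, 4], rng=rng)

In a trial loop, n_bubbles_per_scale would be adjusted from trial to trial so that categorization accuracy stays near the 75% correct criterion mentioned in the abstract.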

Language: English
Journal: Journal of Vision
Volume: 1
Issue number: 3
DOI: 10.1167/1.3.339
Publication status: Published - 1 Dec 2001

Keywords

  • facial recognition
  • spatial scale
  • gender
  • expression

Cite this

@article{7816663849224b77839cb2084f95a660,
  title    = "The spatial scale information that mediates face identification, gender and expression",
  author   = "Schyns, {P. G.} and Lizann Bonnar and F. Gosselin",
  keywords = "facial recognition, spatial scale, gender, expression",
  year     = "2001",
  month    = "12",
  day      = "1",
  doi      = "10.1167/1.3.339",
  language = "English",
  journal  = "Journal of Vision",
  issn     = "1534-7362",
  volume   = "1",
  number   = "3"
}
